00:00:00.001 Started by upstream project "autotest-per-patch" build number 132367 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.065 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.067 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.101 Fetching changes from the remote Git repository 00:00:00.104 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.154 Using shallow fetch with depth 1 00:00:00.154 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.234 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.234 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.938 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.951 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.963 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.963 > git config core.sparsecheckout # timeout=10 00:00:06.975 > git read-tree -mu HEAD # timeout=10 00:00:06.992 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.018 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.018 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.110 [Pipeline] Start of Pipeline 00:00:07.128 [Pipeline] library 00:00:07.130 Loading library shm_lib@master 00:00:07.130 Library shm_lib@master is cached. Copying from home. 00:00:07.148 [Pipeline] node 00:00:07.158 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:07.160 [Pipeline] { 00:00:07.172 [Pipeline] catchError 00:00:07.174 [Pipeline] { 00:00:07.187 [Pipeline] wrap 00:00:07.197 [Pipeline] { 00:00:07.205 [Pipeline] stage 00:00:07.207 [Pipeline] { (Prologue) 00:00:07.222 [Pipeline] echo 00:00:07.223 Node: VM-host-SM16 00:00:07.229 [Pipeline] cleanWs 00:00:07.238 [WS-CLEANUP] Deleting project workspace... 00:00:07.238 [WS-CLEANUP] Deferred wipeout is used... 00:00:07.243 [WS-CLEANUP] done 00:00:07.521 [Pipeline] setCustomBuildProperty 00:00:07.595 [Pipeline] httpRequest 00:00:07.937 [Pipeline] echo 00:00:07.939 Sorcerer 10.211.164.20 is alive 00:00:07.948 [Pipeline] retry 00:00:07.950 [Pipeline] { 00:00:07.965 [Pipeline] httpRequest 00:00:07.969 HttpMethod: GET 00:00:07.970 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.971 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.982 Response Code: HTTP/1.1 200 OK 00:00:07.982 Success: Status code 200 is in the accepted range: 200,404 00:00:07.983 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.497 [Pipeline] } 00:00:12.514 [Pipeline] // retry 00:00:12.521 [Pipeline] sh 00:00:12.802 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.817 [Pipeline] httpRequest 00:00:13.595 [Pipeline] echo 00:00:13.597 Sorcerer 10.211.164.20 is alive 00:00:13.606 [Pipeline] retry 00:00:13.608 [Pipeline] { 00:00:13.623 [Pipeline] httpRequest 00:00:13.628 HttpMethod: GET 00:00:13.629 URL: 
http://10.211.164.20/packages/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz 00:00:13.629 Sending request to url: http://10.211.164.20/packages/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz 00:00:13.650 Response Code: HTTP/1.1 200 OK 00:00:13.651 Success: Status code 200 is in the accepted range: 200,404 00:00:13.651 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz 00:00:52.867 [Pipeline] } 00:00:52.885 [Pipeline] // retry 00:00:52.893 [Pipeline] sh 00:00:53.172 + tar --no-same-owner -xf spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz 00:00:56.464 [Pipeline] sh 00:00:56.746 + git -C spdk log --oneline -n5 00:00:56.746 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:00:56.746 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:00:56.746 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:00:56.746 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:00:56.746 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:00:56.763 [Pipeline] writeFile 00:00:56.781 [Pipeline] sh 00:00:57.059 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:57.071 [Pipeline] sh 00:00:57.350 + cat autorun-spdk.conf 00:00:57.350 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.350 SPDK_TEST_NVMF=1 00:00:57.350 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.350 SPDK_TEST_USDT=1 00:00:57.350 SPDK_TEST_NVMF_MDNS=1 00:00:57.350 SPDK_RUN_UBSAN=1 00:00:57.350 NET_TYPE=virt 00:00:57.350 SPDK_JSONRPC_GO_CLIENT=1 00:00:57.350 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.357 RUN_NIGHTLY=0 00:00:57.359 [Pipeline] } 00:00:57.373 [Pipeline] // stage 00:00:57.391 [Pipeline] stage 00:00:57.393 [Pipeline] { (Run VM) 00:00:57.406 [Pipeline] sh 00:00:57.717 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:57.717 + echo 'Start stage prepare_nvme.sh' 00:00:57.717 Start stage prepare_nvme.sh 00:00:57.717 + [[ -n 6 ]] 00:00:57.717 + 
disk_prefix=ex6 00:00:57.717 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:57.717 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:57.717 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:57.717 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:57.717 ++ SPDK_TEST_NVMF=1 00:00:57.717 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:57.717 ++ SPDK_TEST_USDT=1 00:00:57.717 ++ SPDK_TEST_NVMF_MDNS=1 00:00:57.717 ++ SPDK_RUN_UBSAN=1 00:00:57.717 ++ NET_TYPE=virt 00:00:57.717 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:57.717 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:57.717 ++ RUN_NIGHTLY=0 00:00:57.717 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:57.717 + nvme_files=() 00:00:57.717 + declare -A nvme_files 00:00:57.717 + backend_dir=/var/lib/libvirt/images/backends 00:00:57.717 + nvme_files['nvme.img']=5G 00:00:57.717 + nvme_files['nvme-cmb.img']=5G 00:00:57.717 + nvme_files['nvme-multi0.img']=4G 00:00:57.717 + nvme_files['nvme-multi1.img']=4G 00:00:57.717 + nvme_files['nvme-multi2.img']=4G 00:00:57.717 + nvme_files['nvme-openstack.img']=8G 00:00:57.717 + nvme_files['nvme-zns.img']=5G 00:00:57.717 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:57.717 + (( SPDK_TEST_FTL == 1 )) 00:00:57.717 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:57.717 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:57.717 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:57.717 + for nvme in "${!nvme_files[@]}" 00:00:57.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:57.717 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:57.717 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:57.717 + echo 'End stage prepare_nvme.sh' 00:00:57.717 End stage prepare_nvme.sh 00:00:57.729 [Pipeline] sh 00:00:58.008 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:58.008 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:00:58.008 00:00:58.008 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:58.008 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:58.008 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:58.008 HELP=0 00:00:58.008 DRY_RUN=0 00:00:58.008 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:58.008 NVME_DISKS_TYPE=nvme,nvme, 00:00:58.008 NVME_AUTO_CREATE=0 00:00:58.008 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:58.008 NVME_CMB=,, 00:00:58.008 NVME_PMR=,, 00:00:58.008 NVME_ZNS=,, 00:00:58.008 NVME_MS=,, 00:00:58.008 NVME_FDP=,, 00:00:58.008 SPDK_VAGRANT_DISTRO=fedora39 00:00:58.008 SPDK_VAGRANT_VMCPU=10 00:00:58.008 SPDK_VAGRANT_VMRAM=12288 00:00:58.008 SPDK_VAGRANT_PROVIDER=libvirt 00:00:58.008 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:58.008 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:58.008 SPDK_OPENSTACK_NETWORK=0 00:00:58.008 VAGRANT_PACKAGE_BOX=0 00:00:58.008 
VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:58.008 FORCE_DISTRO=true 00:00:58.008 VAGRANT_BOX_VERSION= 00:00:58.008 EXTRA_VAGRANTFILES= 00:00:58.008 NIC_MODEL=e1000 00:00:58.008 00:00:58.008 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt' 00:00:58.008 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:01:01.290 Bringing machine 'default' up with 'libvirt' provider... 00:01:01.550 ==> default: Creating image (snapshot of base box volume). 00:01:01.550 ==> default: Creating domain with the following settings... 00:01:01.550 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732092880_23b2e272cbe991e9666a 00:01:01.550 ==> default: -- Domain type: kvm 00:01:01.550 ==> default: -- Cpus: 10 00:01:01.550 ==> default: -- Feature: acpi 00:01:01.550 ==> default: -- Feature: apic 00:01:01.550 ==> default: -- Feature: pae 00:01:01.550 ==> default: -- Memory: 12288M 00:01:01.550 ==> default: -- Memory Backing: hugepages: 00:01:01.550 ==> default: -- Management MAC: 00:01:01.550 ==> default: -- Loader: 00:01:01.550 ==> default: -- Nvram: 00:01:01.550 ==> default: -- Base box: spdk/fedora39 00:01:01.550 ==> default: -- Storage pool: default 00:01:01.550 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732092880_23b2e272cbe991e9666a.img (20G) 00:01:01.550 ==> default: -- Volume Cache: default 00:01:01.550 ==> default: -- Kernel: 00:01:01.550 ==> default: -- Initrd: 00:01:01.550 ==> default: -- Graphics Type: vnc 00:01:01.550 ==> default: -- Graphics Port: -1 00:01:01.550 ==> default: -- Graphics IP: 127.0.0.1 00:01:01.550 ==> default: -- Graphics Password: Not defined 00:01:01.550 ==> default: -- Video Type: cirrus 00:01:01.550 ==> default: -- Video VRAM: 9216 00:01:01.550 ==> default: -- Sound Type: 00:01:01.550 ==> default: -- Keymap: en-us 00:01:01.550 ==> default: -- TPM 
Path: 00:01:01.550 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:01.550 ==> default: -- Command line args: 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:01.550 ==> default: -> value=-drive, 00:01:01.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:01.550 ==> default: -> value=-drive, 00:01:01.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.550 ==> default: -> value=-drive, 00:01:01.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.550 ==> default: -> value=-drive, 00:01:01.550 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:01.550 ==> default: -> value=-device, 00:01:01.550 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:01.808 ==> default: Creating shared folders metadata... 00:01:01.808 ==> default: Starting domain. 00:01:03.189 ==> default: Waiting for domain to get an IP address... 
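The "Command line args" dump above shows the NVMe controller/namespace pairs the Vagrant libvirt provider hands to QEMU. A minimal sketch reconstructing one such pair as a standalone argument string (for illustration only; the path, serial, and IDs are copied from the log, and the real invocation is assembled by the provider, not by hand):

```shell
# Hypothetical reconstruction of the first NVMe controller from the log:
# a backing drive with no frontend, an nvme controller, and an nvme-ns
# namespace bound to that drive. 4096-byte block sizes match the log.
img=/var/lib/libvirt/images/backends/ex6-nvme.img
qemu_nvme_args="-drive format=raw,file=${img},if=none,id=nvme-0-drive0 \
-device nvme,id=nvme-0,serial=12340,addr=0x10 \
-device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096"
echo "$qemu_nvme_args"
```

Appending this string to a `qemu-system-x86_64` command line would expose the raw image as namespace 1 of a PCI NVMe controller, which is how the multi-namespace controller (nvme-1 with nsid 1..3) in the log is built as well.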
00:01:21.275 ==> default: Waiting for SSH to become available... 00:01:21.275 ==> default: Configuring and enabling network interfaces... 00:01:24.561 default: SSH address: 192.168.121.76:22 00:01:24.561 default: SSH username: vagrant 00:01:24.561 default: SSH auth method: private key 00:01:27.092 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:35.307 ==> default: Mounting SSHFS shared folder... 00:01:36.685 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:36.685 ==> default: Checking Mount.. 00:01:38.065 ==> default: Folder Successfully Mounted! 00:01:38.065 ==> default: Running provisioner: file... 00:01:38.632 default: ~/.gitconfig => .gitconfig 00:01:39.198 00:01:39.198 SUCCESS! 00:01:39.198 00:01:39.198 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:39.198 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:39.198 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
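The SUCCESS banner above spells out the VM lifecycle commands. A dry-run sketch of the teardown it describes (the `run` wrapper only echoes; drop it to execute for real, which assumes `vagrant` is installed and the `-f` flag to skip the confirmation prompt):

```shell
# Dry-run sketch of the teardown sequence from the banner: destroy the
# VM, then remove the Vagrant working directory. Path copied from the log.
vm_dir=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt
run() { echo "+ $*"; }   # echo-only wrapper; replace body with "$@" to execute
run vagrant destroy -f
run rm -rf "$vm_dir"
```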
00:01:39.198 00:01:39.207 [Pipeline] } 00:01:39.225 [Pipeline] // stage 00:01:39.235 [Pipeline] dir 00:01:39.235 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt 00:01:39.237 [Pipeline] { 00:01:39.250 [Pipeline] catchError 00:01:39.252 [Pipeline] { 00:01:39.264 [Pipeline] sh 00:01:39.544 + vagrant ssh-config --host vagrant 00:01:39.544 + sed -ne /^Host/,$p 00:01:39.544 + tee ssh_conf 00:01:42.933 Host vagrant 00:01:42.933 HostName 192.168.121.76 00:01:42.933 User vagrant 00:01:42.933 Port 22 00:01:42.933 UserKnownHostsFile /dev/null 00:01:42.933 StrictHostKeyChecking no 00:01:42.933 PasswordAuthentication no 00:01:42.933 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:42.933 IdentitiesOnly yes 00:01:42.933 LogLevel FATAL 00:01:42.933 ForwardAgent yes 00:01:42.933 ForwardX11 yes 00:01:42.933 00:01:42.947 [Pipeline] withEnv 00:01:42.950 [Pipeline] { 00:01:42.964 [Pipeline] sh 00:01:43.244 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:43.244 source /etc/os-release 00:01:43.244 [[ -e /image.version ]] && img=$(< /image.version) 00:01:43.244 # Minimal, systemd-like check. 00:01:43.244 if [[ -e /.dockerenv ]]; then 00:01:43.244 # Clear garbage from the node's name: 00:01:43.244 # agt-er_autotest_547-896 -> autotest_547-896 00:01:43.244 # $HOSTNAME is the actual container id 00:01:43.244 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:43.244 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:43.244 # We can assume this is a mount from a host where container is running, 00:01:43.244 # so fetch its hostname to easily identify the target swarm worker. 
00:01:43.244 container="$(< /etc/hostname) ($agent)" 00:01:43.244 else 00:01:43.244 # Fallback 00:01:43.244 container=$agent 00:01:43.244 fi 00:01:43.244 fi 00:01:43.244 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:43.244 00:01:43.514 [Pipeline] } 00:01:43.532 [Pipeline] // withEnv 00:01:43.541 [Pipeline] setCustomBuildProperty 00:01:43.556 [Pipeline] stage 00:01:43.558 [Pipeline] { (Tests) 00:01:43.576 [Pipeline] sh 00:01:43.855 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:44.127 [Pipeline] sh 00:01:44.410 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:44.686 [Pipeline] timeout 00:01:44.686 Timeout set to expire in 1 hr 0 min 00:01:44.689 [Pipeline] { 00:01:44.706 [Pipeline] sh 00:01:44.986 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:45.554 HEAD is now at 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:01:45.567 [Pipeline] sh 00:01:45.846 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:46.119 [Pipeline] sh 00:01:46.399 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:46.677 [Pipeline] sh 00:01:46.958 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:01:47.217 ++ readlink -f spdk_repo 00:01:47.217 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:47.217 + [[ -n /home/vagrant/spdk_repo ]] 00:01:47.217 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:47.217 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:47.217 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:47.217 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:47.217 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:47.217 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:01:47.217 + cd /home/vagrant/spdk_repo 00:01:47.217 + source /etc/os-release 00:01:47.217 ++ NAME='Fedora Linux' 00:01:47.217 ++ VERSION='39 (Cloud Edition)' 00:01:47.217 ++ ID=fedora 00:01:47.217 ++ VERSION_ID=39 00:01:47.217 ++ VERSION_CODENAME= 00:01:47.217 ++ PLATFORM_ID=platform:f39 00:01:47.217 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:47.217 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:47.217 ++ LOGO=fedora-logo-icon 00:01:47.217 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:47.217 ++ HOME_URL=https://fedoraproject.org/ 00:01:47.217 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:47.217 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:47.217 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:47.217 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:47.217 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:47.217 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:47.217 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:47.217 ++ SUPPORT_END=2024-11-12 00:01:47.217 ++ VARIANT='Cloud Edition' 00:01:47.217 ++ VARIANT_ID=cloud 00:01:47.217 + uname -a 00:01:47.217 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:47.217 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:47.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:47.476 Hugepages 00:01:47.476 node hugesize free / total 00:01:47.476 node0 1048576kB 0 / 0 00:01:47.735 node0 2048kB 0 / 0 00:01:47.735 00:01:47.735 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.735 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:47.735 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:47.735 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:47.735 + rm -f /tmp/spdk-ld-path 00:01:47.735 + source autorun-spdk.conf 00:01:47.735 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.735 ++ SPDK_TEST_NVMF=1 00:01:47.735 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.735 ++ SPDK_TEST_USDT=1 00:01:47.735 ++ SPDK_TEST_NVMF_MDNS=1 00:01:47.735 ++ SPDK_RUN_UBSAN=1 00:01:47.735 ++ NET_TYPE=virt 00:01:47.735 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:47.735 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.735 ++ RUN_NIGHTLY=0 00:01:47.735 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.735 + [[ -n '' ]] 00:01:47.735 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:47.735 + for M in /var/spdk/build-*-manifest.txt 00:01:47.735 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:47.735 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.735 + for M in /var/spdk/build-*-manifest.txt 00:01:47.735 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.735 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.735 + for M in /var/spdk/build-*-manifest.txt 00:01:47.735 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:47.735 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:47.735 ++ uname 00:01:47.735 + [[ Linux == \L\i\n\u\x ]] 00:01:47.735 + sudo dmesg -T 00:01:47.735 + sudo dmesg --clear 00:01:47.735 + dmesg_pid=5371 00:01:47.735 + sudo dmesg -Tw 00:01:47.735 + [[ Fedora Linux == FreeBSD ]] 00:01:47.735 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.735 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.735 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.735 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.735 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.735 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.735 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.736 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:47.736 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.736 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.736 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.736 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.736 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.736 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.736 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.994 08:55:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.994 08:55:26 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:47.994 08:55:26 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:47.994 08:55:26 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:47.994 08:55:26 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:47.994 08:55:26 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.994 08:55:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:47.994 08:55:26 -- scripts/common.sh@15 
-- $ shopt -s extglob 00:01:47.994 08:55:26 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.994 08:55:26 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.994 08:55:26 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.994 08:55:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.995 08:55:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.995 08:55:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.995 08:55:26 -- paths/export.sh@5 -- $ export PATH 00:01:47.995 08:55:26 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.995 08:55:26 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:47.995 08:55:26 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:47.995 08:55:26 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732092926.XXXXXX 00:01:47.995 08:55:26 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732092926.k7c8Bd 00:01:47.995 08:55:26 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:47.995 08:55:26 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:47.995 08:55:26 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:47.995 08:55:26 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:47.995 08:55:26 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.995 08:55:26 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:47.995 08:55:26 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:47.995 08:55:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.995 08:55:26 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi 
--with-golang' 00:01:47.995 08:55:26 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:47.995 08:55:26 -- pm/common@17 -- $ local monitor 00:01:47.995 08:55:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.995 08:55:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.995 08:55:26 -- pm/common@25 -- $ sleep 1 00:01:47.995 08:55:26 -- pm/common@21 -- $ date +%s 00:01:47.995 08:55:26 -- pm/common@21 -- $ date +%s 00:01:47.995 08:55:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732092926 00:01:47.995 08:55:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732092926 00:01:47.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732092926_collect-cpu-load.pm.log 00:01:47.995 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732092926_collect-vmstat.pm.log 00:01:48.930 08:55:27 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:48.930 08:55:27 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.930 08:55:27 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.930 08:55:27 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:48.930 08:55:27 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.930 Wed Nov 20 08:55:27 AM UTC 2024 00:01:48.930 08:55:27 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:48.930 v25.01-pre-204-g4f0cbdcd1 00:01:48.930 08:55:27 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.930 08:55:27 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.930 08:55:27 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.930 08:55:27 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.930 08:55:27 -- common/autotest_common.sh@1111 -- $ 
xtrace_disable 00:01:48.930 08:55:27 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.930 ************************************ 00:01:48.930 START TEST ubsan 00:01:48.930 ************************************ 00:01:48.930 using ubsan 00:01:48.930 08:55:27 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:48.930 00:01:48.930 real 0m0.000s 00:01:48.930 user 0m0.000s 00:01:48.930 sys 0m0.000s 00:01:48.930 08:55:27 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:48.930 08:55:27 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.930 ************************************ 00:01:48.930 END TEST ubsan 00:01:48.930 ************************************ 00:01:48.930 08:55:27 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:48.930 08:55:27 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.930 08:55:27 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.930 08:55:27 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.930 08:55:27 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.930 08:55:27 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:49.189 08:55:27 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:49.189 08:55:27 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:49.189 08:55:27 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:01:49.189 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:49.189 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:49.758 Using 'verbs' RDMA provider 00:02:05.206 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:17.410 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 
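The `configure` call above receives flags such as `--enable-ubsan` and `--with-usdt` that correspond to switches in autorun-spdk.conf. A hypothetical sketch of that mapping (the real logic lives in `get_config_params` in common/autobuild_common.sh; the variable and flag names are copied from the log, but this helper itself is invented for illustration):

```shell
# Hypothetical conf-to-flags mapping: start from the always-on flags seen
# in the log, then append per-feature flags when the conf switch is set.
SPDK_RUN_UBSAN=1
SPDK_TEST_USDT=1
config_params='--enable-debug --enable-werror'
if [ "${SPDK_RUN_UBSAN:-0}" -eq 1 ]; then
  config_params="$config_params --enable-ubsan"
fi
if [ "${SPDK_TEST_USDT:-0}" -eq 1 ]; then
  config_params="$config_params --with-usdt"
fi
echo "$config_params"
```

With both switches set as in this run, the sketch yields `--enable-debug --enable-werror --enable-ubsan --with-usdt`, a subset of the full flag list the log shows being passed to `configure`.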
00:02:17.410 go version go1.21.1 linux/amd64 00:02:17.410 Creating mk/config.mk...done. 00:02:17.410 Creating mk/cc.flags.mk...done. 00:02:17.410 Type 'make' to build. 00:02:17.410 08:55:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:17.410 08:55:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:17.410 08:55:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:17.410 08:55:56 -- common/autotest_common.sh@10 -- $ set +x 00:02:17.410 ************************************ 00:02:17.410 START TEST make 00:02:17.410 ************************************ 00:02:17.410 08:55:56 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:17.976 make[1]: Nothing to be done for 'all'. 00:02:32.873 The Meson build system 00:02:32.873 Version: 1.5.0 00:02:32.873 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:32.873 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:32.873 Build type: native build 00:02:32.873 Program cat found: YES (/usr/bin/cat) 00:02:32.873 Project name: DPDK 00:02:32.873 Project version: 24.03.0 00:02:32.873 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:32.873 C linker for the host machine: cc ld.bfd 2.40-14 00:02:32.873 Host machine cpu family: x86_64 00:02:32.873 Host machine cpu: x86_64 00:02:32.873 Message: ## Building in Developer Mode ## 00:02:32.873 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.873 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:32.873 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.873 Program python3 found: YES (/usr/bin/python3) 00:02:32.873 Program cat found: YES (/usr/bin/cat) 00:02:32.873 Compiler for C supports arguments -march=native: YES 00:02:32.873 Checking for size of "void *" : 8 00:02:32.873 Checking for size of "void *" : 8 (cached) 00:02:32.873 Compiler for C 
supports link arguments -Wl,--undefined-version: YES 00:02:32.873 Library m found: YES 00:02:32.873 Library numa found: YES 00:02:32.873 Has header "numaif.h" : YES 00:02:32.873 Library fdt found: NO 00:02:32.873 Library execinfo found: NO 00:02:32.873 Has header "execinfo.h" : YES 00:02:32.873 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:32.873 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.873 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.873 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.873 Run-time dependency openssl found: YES 3.1.1 00:02:32.873 Run-time dependency libpcap found: YES 1.10.4 00:02:32.873 Has header "pcap.h" with dependency libpcap: YES 00:02:32.873 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.873 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.873 Compiler for C supports arguments -Wformat: YES 00:02:32.873 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.873 Compiler for C supports arguments -Wformat-security: NO 00:02:32.873 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.873 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.873 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.873 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.873 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.873 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.873 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.873 Compiler for C supports arguments -Wundef: YES 00:02:32.873 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.873 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:32.873 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:32.873 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.873 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:02:32.873 Program objdump found: YES (/usr/bin/objdump) 00:02:32.873 Compiler for C supports arguments -mavx512f: YES 00:02:32.873 Checking if "AVX512 checking" compiles: YES 00:02:32.873 Fetching value of define "__SSE4_2__" : 1 00:02:32.873 Fetching value of define "__AES__" : 1 00:02:32.873 Fetching value of define "__AVX__" : 1 00:02:32.873 Fetching value of define "__AVX2__" : 1 00:02:32.873 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.873 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.873 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.873 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.873 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.873 Fetching value of define "__PCLMUL__" : 1 00:02:32.873 Fetching value of define "__RDRND__" : 1 00:02:32.873 Fetching value of define "__RDSEED__" : 1 00:02:32.873 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.873 Fetching value of define "__znver1__" : (undefined) 00:02:32.873 Fetching value of define "__znver2__" : (undefined) 00:02:32.873 Fetching value of define "__znver3__" : (undefined) 00:02:32.873 Fetching value of define "__znver4__" : (undefined) 00:02:32.873 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.873 Message: lib/log: Defining dependency "log" 00:02:32.873 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.873 Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.873 Checking for function "getentropy" : NO 00:02:32.873 Message: lib/eal: Defining dependency "eal" 00:02:32.873 Message: lib/ring: Defining dependency "ring" 00:02:32.873 Message: lib/rcu: Defining dependency "rcu" 00:02:32.873 Message: lib/mempool: Defining dependency "mempool" 00:02:32.873 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.873 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.873 Fetching value of define "__AVX512F__" : (undefined) (cached) 
00:02:32.873 Compiler for C supports arguments -mpclmul: YES 00:02:32.873 Compiler for C supports arguments -maes: YES 00:02:32.873 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.873 Compiler for C supports arguments -mavx512bw: YES 00:02:32.873 Compiler for C supports arguments -mavx512dq: YES 00:02:32.873 Compiler for C supports arguments -mavx512vl: YES 00:02:32.873 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.873 Compiler for C supports arguments -mavx2: YES 00:02:32.873 Compiler for C supports arguments -mavx: YES 00:02:32.873 Message: lib/net: Defining dependency "net" 00:02:32.873 Message: lib/meter: Defining dependency "meter" 00:02:32.873 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.873 Message: lib/pci: Defining dependency "pci" 00:02:32.873 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.873 Message: lib/hash: Defining dependency "hash" 00:02:32.873 Message: lib/timer: Defining dependency "timer" 00:02:32.873 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.873 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.873 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.873 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.873 Message: lib/power: Defining dependency "power" 00:02:32.873 Message: lib/reorder: Defining dependency "reorder" 00:02:32.873 Message: lib/security: Defining dependency "security" 00:02:32.873 Has header "linux/userfaultfd.h" : YES 00:02:32.873 Has header "linux/vduse.h" : YES 00:02:32.873 Message: lib/vhost: Defining dependency "vhost" 00:02:32.873 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.873 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.873 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.873 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.873 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 
00:02:32.873 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:32.873 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:32.873 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:32.873 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:32.873 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:32.873 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:32.873 Configuring doxy-api-html.conf using configuration 00:02:32.873 Configuring doxy-api-man.conf using configuration 00:02:32.873 Program mandb found: YES (/usr/bin/mandb) 00:02:32.873 Program sphinx-build found: NO 00:02:32.873 Configuring rte_build_config.h using configuration 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Applications Enabled 00:02:32.873 ================= 00:02:32.873 00:02:32.873 apps: 00:02:32.873 00:02:32.873 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Libraries Enabled 00:02:32.873 ================= 00:02:32.873 00:02:32.873 libs: 00:02:32.873 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:32.873 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:32.873 cryptodev, dmadev, power, reorder, security, vhost, 00:02:32.873 00:02:32.873 Message: 00:02:32.873 =============== 00:02:32.873 Drivers Enabled 00:02:32.873 =============== 00:02:32.873 00:02:32.873 common: 00:02:32.873 00:02:32.873 bus: 00:02:32.873 pci, vdev, 00:02:32.873 mempool: 00:02:32.873 ring, 00:02:32.873 dma: 00:02:32.873 00:02:32.873 net: 00:02:32.873 00:02:32.873 crypto: 00:02:32.873 00:02:32.873 compress: 00:02:32.873 00:02:32.873 vdpa: 00:02:32.873 00:02:32.873 00:02:32.873 Message: 00:02:32.873 ================= 00:02:32.873 Content Skipped 00:02:32.873 ================= 00:02:32.873 00:02:32.873 apps: 00:02:32.873 dumpcap: explicitly disabled via build config 00:02:32.873 graph: explicitly disabled via build 
config 00:02:32.873 pdump: explicitly disabled via build config 00:02:32.873 proc-info: explicitly disabled via build config 00:02:32.873 test-acl: explicitly disabled via build config 00:02:32.873 test-bbdev: explicitly disabled via build config 00:02:32.873 test-cmdline: explicitly disabled via build config 00:02:32.873 test-compress-perf: explicitly disabled via build config 00:02:32.873 test-crypto-perf: explicitly disabled via build config 00:02:32.873 test-dma-perf: explicitly disabled via build config 00:02:32.873 test-eventdev: explicitly disabled via build config 00:02:32.873 test-fib: explicitly disabled via build config 00:02:32.873 test-flow-perf: explicitly disabled via build config 00:02:32.873 test-gpudev: explicitly disabled via build config 00:02:32.873 test-mldev: explicitly disabled via build config 00:02:32.873 test-pipeline: explicitly disabled via build config 00:02:32.873 test-pmd: explicitly disabled via build config 00:02:32.873 test-regex: explicitly disabled via build config 00:02:32.874 test-sad: explicitly disabled via build config 00:02:32.874 test-security-perf: explicitly disabled via build config 00:02:32.874 00:02:32.874 libs: 00:02:32.874 argparse: explicitly disabled via build config 00:02:32.874 metrics: explicitly disabled via build config 00:02:32.874 acl: explicitly disabled via build config 00:02:32.874 bbdev: explicitly disabled via build config 00:02:32.874 bitratestats: explicitly disabled via build config 00:02:32.874 bpf: explicitly disabled via build config 00:02:32.874 cfgfile: explicitly disabled via build config 00:02:32.874 distributor: explicitly disabled via build config 00:02:32.874 efd: explicitly disabled via build config 00:02:32.874 eventdev: explicitly disabled via build config 00:02:32.874 dispatcher: explicitly disabled via build config 00:02:32.874 gpudev: explicitly disabled via build config 00:02:32.874 gro: explicitly disabled via build config 00:02:32.874 gso: explicitly disabled via build config 
00:02:32.874 ip_frag: explicitly disabled via build config 00:02:32.874 jobstats: explicitly disabled via build config 00:02:32.874 latencystats: explicitly disabled via build config 00:02:32.874 lpm: explicitly disabled via build config 00:02:32.874 member: explicitly disabled via build config 00:02:32.874 pcapng: explicitly disabled via build config 00:02:32.874 rawdev: explicitly disabled via build config 00:02:32.874 regexdev: explicitly disabled via build config 00:02:32.874 mldev: explicitly disabled via build config 00:02:32.874 rib: explicitly disabled via build config 00:02:32.874 sched: explicitly disabled via build config 00:02:32.874 stack: explicitly disabled via build config 00:02:32.874 ipsec: explicitly disabled via build config 00:02:32.874 pdcp: explicitly disabled via build config 00:02:32.874 fib: explicitly disabled via build config 00:02:32.874 port: explicitly disabled via build config 00:02:32.874 pdump: explicitly disabled via build config 00:02:32.874 table: explicitly disabled via build config 00:02:32.874 pipeline: explicitly disabled via build config 00:02:32.874 graph: explicitly disabled via build config 00:02:32.874 node: explicitly disabled via build config 00:02:32.874 00:02:32.874 drivers: 00:02:32.874 common/cpt: not in enabled drivers build config 00:02:32.874 common/dpaax: not in enabled drivers build config 00:02:32.874 common/iavf: not in enabled drivers build config 00:02:32.874 common/idpf: not in enabled drivers build config 00:02:32.874 common/ionic: not in enabled drivers build config 00:02:32.874 common/mvep: not in enabled drivers build config 00:02:32.874 common/octeontx: not in enabled drivers build config 00:02:32.874 bus/auxiliary: not in enabled drivers build config 00:02:32.874 bus/cdx: not in enabled drivers build config 00:02:32.874 bus/dpaa: not in enabled drivers build config 00:02:32.874 bus/fslmc: not in enabled drivers build config 00:02:32.874 bus/ifpga: not in enabled drivers build config 00:02:32.874 
bus/platform: not in enabled drivers build config 00:02:32.874 bus/uacce: not in enabled drivers build config 00:02:32.874 bus/vmbus: not in enabled drivers build config 00:02:32.874 common/cnxk: not in enabled drivers build config 00:02:32.874 common/mlx5: not in enabled drivers build config 00:02:32.874 common/nfp: not in enabled drivers build config 00:02:32.874 common/nitrox: not in enabled drivers build config 00:02:32.874 common/qat: not in enabled drivers build config 00:02:32.874 common/sfc_efx: not in enabled drivers build config 00:02:32.874 mempool/bucket: not in enabled drivers build config 00:02:32.874 mempool/cnxk: not in enabled drivers build config 00:02:32.874 mempool/dpaa: not in enabled drivers build config 00:02:32.874 mempool/dpaa2: not in enabled drivers build config 00:02:32.874 mempool/octeontx: not in enabled drivers build config 00:02:32.874 mempool/stack: not in enabled drivers build config 00:02:32.874 dma/cnxk: not in enabled drivers build config 00:02:32.874 dma/dpaa: not in enabled drivers build config 00:02:32.874 dma/dpaa2: not in enabled drivers build config 00:02:32.874 dma/hisilicon: not in enabled drivers build config 00:02:32.874 dma/idxd: not in enabled drivers build config 00:02:32.874 dma/ioat: not in enabled drivers build config 00:02:32.874 dma/skeleton: not in enabled drivers build config 00:02:32.874 net/af_packet: not in enabled drivers build config 00:02:32.874 net/af_xdp: not in enabled drivers build config 00:02:32.874 net/ark: not in enabled drivers build config 00:02:32.874 net/atlantic: not in enabled drivers build config 00:02:32.874 net/avp: not in enabled drivers build config 00:02:32.874 net/axgbe: not in enabled drivers build config 00:02:32.874 net/bnx2x: not in enabled drivers build config 00:02:32.874 net/bnxt: not in enabled drivers build config 00:02:32.874 net/bonding: not in enabled drivers build config 00:02:32.874 net/cnxk: not in enabled drivers build config 00:02:32.874 net/cpfl: not in enabled 
drivers build config 00:02:32.874 net/cxgbe: not in enabled drivers build config 00:02:32.874 net/dpaa: not in enabled drivers build config 00:02:32.874 net/dpaa2: not in enabled drivers build config 00:02:32.874 net/e1000: not in enabled drivers build config 00:02:32.874 net/ena: not in enabled drivers build config 00:02:32.874 net/enetc: not in enabled drivers build config 00:02:32.874 net/enetfec: not in enabled drivers build config 00:02:32.874 net/enic: not in enabled drivers build config 00:02:32.874 net/failsafe: not in enabled drivers build config 00:02:32.874 net/fm10k: not in enabled drivers build config 00:02:32.874 net/gve: not in enabled drivers build config 00:02:32.874 net/hinic: not in enabled drivers build config 00:02:32.874 net/hns3: not in enabled drivers build config 00:02:32.874 net/i40e: not in enabled drivers build config 00:02:32.874 net/iavf: not in enabled drivers build config 00:02:32.874 net/ice: not in enabled drivers build config 00:02:32.874 net/idpf: not in enabled drivers build config 00:02:32.874 net/igc: not in enabled drivers build config 00:02:32.874 net/ionic: not in enabled drivers build config 00:02:32.874 net/ipn3ke: not in enabled drivers build config 00:02:32.874 net/ixgbe: not in enabled drivers build config 00:02:32.874 net/mana: not in enabled drivers build config 00:02:32.874 net/memif: not in enabled drivers build config 00:02:32.874 net/mlx4: not in enabled drivers build config 00:02:32.874 net/mlx5: not in enabled drivers build config 00:02:32.874 net/mvneta: not in enabled drivers build config 00:02:32.874 net/mvpp2: not in enabled drivers build config 00:02:32.874 net/netvsc: not in enabled drivers build config 00:02:32.874 net/nfb: not in enabled drivers build config 00:02:32.874 net/nfp: not in enabled drivers build config 00:02:32.874 net/ngbe: not in enabled drivers build config 00:02:32.874 net/null: not in enabled drivers build config 00:02:32.874 net/octeontx: not in enabled drivers build config 
00:02:32.874 net/octeon_ep: not in enabled drivers build config 00:02:32.874 net/pcap: not in enabled drivers build config 00:02:32.874 net/pfe: not in enabled drivers build config 00:02:32.874 net/qede: not in enabled drivers build config 00:02:32.874 net/ring: not in enabled drivers build config 00:02:32.874 net/sfc: not in enabled drivers build config 00:02:32.874 net/softnic: not in enabled drivers build config 00:02:32.874 net/tap: not in enabled drivers build config 00:02:32.874 net/thunderx: not in enabled drivers build config 00:02:32.874 net/txgbe: not in enabled drivers build config 00:02:32.874 net/vdev_netvsc: not in enabled drivers build config 00:02:32.874 net/vhost: not in enabled drivers build config 00:02:32.874 net/virtio: not in enabled drivers build config 00:02:32.874 net/vmxnet3: not in enabled drivers build config 00:02:32.874 raw/*: missing internal dependency, "rawdev" 00:02:32.874 crypto/armv8: not in enabled drivers build config 00:02:32.874 crypto/bcmfs: not in enabled drivers build config 00:02:32.874 crypto/caam_jr: not in enabled drivers build config 00:02:32.874 crypto/ccp: not in enabled drivers build config 00:02:32.874 crypto/cnxk: not in enabled drivers build config 00:02:32.874 crypto/dpaa_sec: not in enabled drivers build config 00:02:32.874 crypto/dpaa2_sec: not in enabled drivers build config 00:02:32.874 crypto/ipsec_mb: not in enabled drivers build config 00:02:32.874 crypto/mlx5: not in enabled drivers build config 00:02:32.874 crypto/mvsam: not in enabled drivers build config 00:02:32.874 crypto/nitrox: not in enabled drivers build config 00:02:32.874 crypto/null: not in enabled drivers build config 00:02:32.874 crypto/octeontx: not in enabled drivers build config 00:02:32.874 crypto/openssl: not in enabled drivers build config 00:02:32.874 crypto/scheduler: not in enabled drivers build config 00:02:32.874 crypto/uadk: not in enabled drivers build config 00:02:32.874 crypto/virtio: not in enabled drivers build config 
00:02:32.874 compress/isal: not in enabled drivers build config 00:02:32.874 compress/mlx5: not in enabled drivers build config 00:02:32.874 compress/nitrox: not in enabled drivers build config 00:02:32.874 compress/octeontx: not in enabled drivers build config 00:02:32.874 compress/zlib: not in enabled drivers build config 00:02:32.874 regex/*: missing internal dependency, "regexdev" 00:02:32.874 ml/*: missing internal dependency, "mldev" 00:02:32.874 vdpa/ifc: not in enabled drivers build config 00:02:32.874 vdpa/mlx5: not in enabled drivers build config 00:02:32.874 vdpa/nfp: not in enabled drivers build config 00:02:32.874 vdpa/sfc: not in enabled drivers build config 00:02:32.874 event/*: missing internal dependency, "eventdev" 00:02:32.874 baseband/*: missing internal dependency, "bbdev" 00:02:32.874 gpu/*: missing internal dependency, "gpudev" 00:02:32.874 00:02:32.874 00:02:32.874 Build targets in project: 85 00:02:32.874 00:02:32.874 DPDK 24.03.0 00:02:32.874 00:02:32.874 User defined options 00:02:32.874 buildtype : debug 00:02:32.874 default_library : shared 00:02:32.874 libdir : lib 00:02:32.874 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:32.874 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:32.874 c_link_args : 00:02:32.874 cpu_instruction_set: native 00:02:32.874 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:32.875 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:32.875 enable_docs : false 00:02:32.875 enable_drivers : 
bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:32.875 enable_kmods : false 00:02:32.875 max_lcores : 128 00:02:32.875 tests : false 00:02:32.875 00:02:32.875 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.875 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:32.875 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:32.875 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:32.875 [3/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:32.875 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:32.875 [5/268] Linking static target lib/librte_kvargs.a 00:02:32.875 [6/268] Linking static target lib/librte_log.a 00:02:32.875 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.875 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:32.875 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:32.875 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.133 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:33.133 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.133 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.133 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.133 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.133 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.133 [17/268] Linking static target lib/librte_telemetry.a 00:02:33.133 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.133 [19/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.392 [20/268] Linking target lib/librte_log.so.24.1 00:02:33.650 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:33.650 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:33.650 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:33.908 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.908 [25/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:33.908 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:33.908 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.908 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.908 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:34.166 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.166 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:34.166 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.166 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:34.166 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.424 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.425 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:34.425 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.682 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.941 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.941 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.941 [41/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.941 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.941 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.941 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:34.941 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.941 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:34.941 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.199 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:35.457 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:35.457 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:35.457 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:35.715 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.991 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:35.991 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:35.991 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:35.991 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:35.991 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:35.991 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:35.991 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:36.258 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:36.517 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:36.517 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:36.517 [63/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:36.517 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:36.777 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:36.777 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.035 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.035 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.035 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:37.035 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.293 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.293 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:37.552 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.552 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.552 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.552 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.552 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:37.552 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:37.810 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.810 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:37.810 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:38.068 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:38.068 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.068 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:38.327 [85/268] Linking static target lib/librte_eal.a 00:02:38.327 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 
00:02:38.327 [87/268] Linking static target lib/librte_ring.a 00:02:38.327 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.327 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.327 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.327 [91/268] Linking static target lib/librte_rcu.a 00:02:38.586 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:38.586 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:38.586 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:38.586 [95/268] Linking static target lib/librte_mempool.a 00:02:38.586 [96/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:38.586 [97/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:38.844 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.844 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.844 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:39.103 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:39.103 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:39.103 [103/268] Linking static target lib/librte_mbuf.a 00:02:39.362 [104/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:39.362 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:39.362 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:39.362 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:39.362 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:39.362 [109/268] Linking static target lib/librte_net.a 00:02:39.621 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:39.621 [111/268] 
Linking static target lib/librte_meter.a 00:02:39.879 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.879 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.879 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:39.879 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.879 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:40.137 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:40.138 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:40.138 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.704 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:40.704 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:40.704 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:40.704 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:40.962 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:40.962 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.962 [126/268] Linking static target lib/librte_pci.a 00:02:41.220 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:41.220 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:41.220 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.220 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.478 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.478 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:41.478 [133/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:41.478 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.478 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:41.478 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.478 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.478 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.478 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.478 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.478 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.478 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.737 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:41.737 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:41.737 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:41.995 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:41.995 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:41.995 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:41.995 [149/268] Linking static target lib/librte_cmdline.a 00:02:41.995 [150/268] Linking static target lib/librte_ethdev.a 00:02:42.254 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.254 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.512 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.512 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.512 [155/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.512 [156/268] Linking static target lib/librte_timer.a 00:02:42.512 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.081 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.081 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.081 [160/268] Linking static target lib/librte_compressdev.a 00:02:43.081 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:43.081 [162/268] Linking static target lib/librte_hash.a 00:02:43.340 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.340 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.340 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:43.340 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.598 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:43.598 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.598 [169/268] Linking static target lib/librte_dmadev.a 00:02:43.857 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.857 [171/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.857 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:43.857 [173/268] Linking static target lib/librte_cryptodev.a 00:02:43.857 [174/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:43.857 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.857 [176/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.116 [177/268] Compiling C object 
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.116 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.374 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.374 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.633 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.633 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.633 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.633 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.892 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.150 [186/268] Linking static target lib/librte_reorder.a 00:02:45.150 [187/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.150 [188/268] Linking static target lib/librte_power.a 00:02:45.150 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.150 [190/268] Linking static target lib/librte_security.a 00:02:45.150 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.408 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.677 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:45.677 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.677 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.265 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.265 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.265 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.265 [199/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:46.524 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.524 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.524 [202/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.783 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.783 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.783 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:47.042 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:47.042 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:47.042 [208/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:47.300 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:47.300 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.300 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.300 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.300 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.559 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:47.559 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.559 [216/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.559 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.559 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.559 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:47.559 [220/268] Linking static target drivers/librte_bus_vdev.a 00:02:47.559 [221/268] 
Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.559 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.818 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.818 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.818 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.818 [226/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.818 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:47.818 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.753 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:48.753 [230/268] Linking static target lib/librte_vhost.a 00:02:49.319 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.575 [232/268] Linking target lib/librte_eal.so.24.1 00:02:49.575 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:49.834 [234/268] Linking target lib/librte_ring.so.24.1 00:02:49.834 [235/268] Linking target lib/librte_pci.so.24.1 00:02:49.834 [236/268] Linking target lib/librte_timer.so.24.1 00:02:49.834 [237/268] Linking target lib/librte_meter.so.24.1 00:02:49.834 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:49.834 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:49.834 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:49.834 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:49.834 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:49.834 [243/268] Generating symbol file 
lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:49.834 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:49.834 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:49.834 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:49.834 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.092 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.092 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:50.092 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:50.092 [251/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.092 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:50.092 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:50.349 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:50.349 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:50.349 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:02:50.349 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:50.349 [258/268] Linking target lib/librte_net.so.24.1 00:02:50.349 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:50.349 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:50.606 [261/268] Linking target lib/librte_security.so.24.1 00:02:50.606 [262/268] Linking target lib/librte_hash.so.24.1 00:02:50.606 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:50.606 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:50.606 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.606 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:50.606 [267/268] Linking target 
lib/librte_power.so.24.1 00:02:50.864 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:50.864 INFO: autodetecting backend as ninja 00:02:50.864 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:22.934 CC lib/log/log_flags.o 00:03:22.934 CC lib/log/log.o 00:03:22.934 CC lib/ut_mock/mock.o 00:03:22.934 CC lib/log/log_deprecated.o 00:03:22.934 CC lib/ut/ut.o 00:03:22.934 LIB libspdk_log.a 00:03:22.934 LIB libspdk_ut.a 00:03:22.934 LIB libspdk_ut_mock.a 00:03:22.934 SO libspdk_ut_mock.so.6.0 00:03:22.934 SO libspdk_ut.so.2.0 00:03:22.934 SO libspdk_log.so.7.1 00:03:22.934 SYMLINK libspdk_ut_mock.so 00:03:22.934 SYMLINK libspdk_ut.so 00:03:22.934 SYMLINK libspdk_log.so 00:03:22.934 CXX lib/trace_parser/trace.o 00:03:22.934 CC lib/dma/dma.o 00:03:22.934 CC lib/util/base64.o 00:03:22.934 CC lib/util/bit_array.o 00:03:22.934 CC lib/util/cpuset.o 00:03:22.934 CC lib/util/crc32.o 00:03:22.934 CC lib/util/crc16.o 00:03:22.934 CC lib/ioat/ioat.o 00:03:22.934 CC lib/util/crc32c.o 00:03:22.934 CC lib/vfio_user/host/vfio_user_pci.o 00:03:22.934 CC lib/util/crc32_ieee.o 00:03:22.934 CC lib/util/crc64.o 00:03:22.934 CC lib/util/dif.o 00:03:22.934 CC lib/util/fd.o 00:03:22.934 LIB libspdk_dma.a 00:03:22.934 SO libspdk_dma.so.5.0 00:03:22.934 CC lib/util/fd_group.o 00:03:22.934 CC lib/vfio_user/host/vfio_user.o 00:03:22.934 CC lib/util/file.o 00:03:22.934 SYMLINK libspdk_dma.so 00:03:22.934 CC lib/util/hexlify.o 00:03:22.934 CC lib/util/iov.o 00:03:22.934 CC lib/util/math.o 00:03:22.934 CC lib/util/net.o 00:03:22.934 LIB libspdk_ioat.a 00:03:22.934 SO libspdk_ioat.so.7.0 00:03:22.934 CC lib/util/pipe.o 00:03:22.934 CC lib/util/strerror_tls.o 00:03:22.934 CC lib/util/string.o 00:03:22.934 CC lib/util/uuid.o 00:03:22.934 SYMLINK libspdk_ioat.so 00:03:22.934 CC lib/util/xor.o 00:03:22.934 LIB libspdk_vfio_user.a 00:03:22.934 CC lib/util/zipf.o 00:03:22.934 CC lib/util/md5.o 00:03:22.934 SO 
libspdk_vfio_user.so.5.0 00:03:22.934 SYMLINK libspdk_vfio_user.so 00:03:22.934 LIB libspdk_util.a 00:03:22.934 SO libspdk_util.so.10.1 00:03:22.934 LIB libspdk_trace_parser.a 00:03:22.934 SO libspdk_trace_parser.so.6.0 00:03:22.934 SYMLINK libspdk_util.so 00:03:22.934 SYMLINK libspdk_trace_parser.so 00:03:22.934 CC lib/conf/conf.o 00:03:22.934 CC lib/vmd/vmd.o 00:03:22.934 CC lib/json/json_parse.o 00:03:22.934 CC lib/idxd/idxd_user.o 00:03:22.934 CC lib/env_dpdk/env.o 00:03:22.934 CC lib/idxd/idxd.o 00:03:22.934 CC lib/json/json_util.o 00:03:22.934 CC lib/vmd/led.o 00:03:22.934 CC lib/idxd/idxd_kernel.o 00:03:22.934 CC lib/rdma_utils/rdma_utils.o 00:03:22.934 CC lib/env_dpdk/memory.o 00:03:22.934 CC lib/env_dpdk/pci.o 00:03:22.934 LIB libspdk_conf.a 00:03:22.934 CC lib/env_dpdk/init.o 00:03:22.934 CC lib/env_dpdk/threads.o 00:03:22.934 SO libspdk_conf.so.6.0 00:03:22.934 SYMLINK libspdk_conf.so 00:03:22.934 CC lib/env_dpdk/pci_ioat.o 00:03:22.934 CC lib/json/json_write.o 00:03:22.934 CC lib/env_dpdk/pci_virtio.o 00:03:22.934 LIB libspdk_rdma_utils.a 00:03:22.934 CC lib/env_dpdk/pci_vmd.o 00:03:22.934 SO libspdk_rdma_utils.so.1.0 00:03:22.934 CC lib/env_dpdk/pci_idxd.o 00:03:22.934 CC lib/env_dpdk/pci_event.o 00:03:22.934 LIB libspdk_idxd.a 00:03:22.934 SYMLINK libspdk_rdma_utils.so 00:03:22.934 CC lib/env_dpdk/sigbus_handler.o 00:03:22.934 CC lib/env_dpdk/pci_dpdk.o 00:03:22.934 LIB libspdk_vmd.a 00:03:22.934 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:22.934 SO libspdk_idxd.so.12.1 00:03:22.934 SO libspdk_vmd.so.6.0 00:03:22.934 SYMLINK libspdk_idxd.so 00:03:22.934 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.934 SYMLINK libspdk_vmd.so 00:03:22.934 LIB libspdk_json.a 00:03:22.934 SO libspdk_json.so.6.0 00:03:22.934 SYMLINK libspdk_json.so 00:03:22.934 CC lib/rdma_provider/common.o 00:03:22.934 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:22.934 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.934 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.934 CC lib/jsonrpc/jsonrpc_client.o 
00:03:22.934 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.934 LIB libspdk_rdma_provider.a 00:03:22.934 SO libspdk_rdma_provider.so.7.0 00:03:22.934 SYMLINK libspdk_rdma_provider.so 00:03:22.934 LIB libspdk_jsonrpc.a 00:03:22.934 LIB libspdk_env_dpdk.a 00:03:22.934 SO libspdk_jsonrpc.so.6.0 00:03:22.934 SYMLINK libspdk_jsonrpc.so 00:03:22.934 SO libspdk_env_dpdk.so.15.1 00:03:22.934 SYMLINK libspdk_env_dpdk.so 00:03:22.934 CC lib/rpc/rpc.o 00:03:22.934 LIB libspdk_rpc.a 00:03:22.934 SO libspdk_rpc.so.6.0 00:03:22.934 SYMLINK libspdk_rpc.so 00:03:22.934 CC lib/keyring/keyring.o 00:03:22.934 CC lib/keyring/keyring_rpc.o 00:03:22.934 CC lib/trace/trace.o 00:03:22.934 CC lib/notify/notify.o 00:03:22.934 CC lib/notify/notify_rpc.o 00:03:22.934 CC lib/trace/trace_flags.o 00:03:22.934 CC lib/trace/trace_rpc.o 00:03:22.934 LIB libspdk_notify.a 00:03:22.934 SO libspdk_notify.so.6.0 00:03:22.934 LIB libspdk_keyring.a 00:03:22.934 SYMLINK libspdk_notify.so 00:03:22.934 SO libspdk_keyring.so.2.0 00:03:22.934 LIB libspdk_trace.a 00:03:22.934 SO libspdk_trace.so.11.0 00:03:22.934 SYMLINK libspdk_keyring.so 00:03:22.934 SYMLINK libspdk_trace.so 00:03:22.934 CC lib/sock/sock_rpc.o 00:03:22.934 CC lib/sock/sock.o 00:03:22.934 CC lib/thread/thread.o 00:03:22.934 CC lib/thread/iobuf.o 00:03:23.498 LIB libspdk_sock.a 00:03:23.498 SO libspdk_sock.so.10.0 00:03:23.498 SYMLINK libspdk_sock.so 00:03:23.757 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:23.757 CC lib/nvme/nvme_fabric.o 00:03:23.757 CC lib/nvme/nvme_ctrlr.o 00:03:23.757 CC lib/nvme/nvme_ns_cmd.o 00:03:23.757 CC lib/nvme/nvme_pcie.o 00:03:23.757 CC lib/nvme/nvme_qpair.o 00:03:23.757 CC lib/nvme/nvme_pcie_common.o 00:03:23.757 CC lib/nvme/nvme.o 00:03:23.757 CC lib/nvme/nvme_ns.o 00:03:24.691 CC lib/nvme/nvme_quirks.o 00:03:24.691 LIB libspdk_thread.a 00:03:24.691 SO libspdk_thread.so.11.0 00:03:24.691 CC lib/nvme/nvme_transport.o 00:03:24.691 CC lib/nvme/nvme_discovery.o 00:03:24.691 SYMLINK libspdk_thread.so 00:03:24.691 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.691 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.950 CC lib/nvme/nvme_tcp.o 00:03:24.950 CC lib/nvme/nvme_opal.o 00:03:24.950 CC lib/nvme/nvme_io_msg.o 00:03:24.950 CC lib/nvme/nvme_poll_group.o 00:03:25.208 CC lib/nvme/nvme_zns.o 00:03:25.490 CC lib/nvme/nvme_stubs.o 00:03:25.490 CC lib/nvme/nvme_auth.o 00:03:25.490 CC lib/nvme/nvme_cuse.o 00:03:25.490 CC lib/nvme/nvme_rdma.o 00:03:26.060 CC lib/accel/accel.o 00:03:26.060 CC lib/blob/blobstore.o 00:03:26.060 CC lib/init/json_config.o 00:03:26.060 CC lib/init/subsystem.o 00:03:26.060 CC lib/init/subsystem_rpc.o 00:03:26.060 CC lib/accel/accel_rpc.o 00:03:26.060 CC lib/accel/accel_sw.o 00:03:26.318 CC lib/init/rpc.o 00:03:26.318 CC lib/blob/request.o 00:03:26.318 CC lib/blob/zeroes.o 00:03:26.318 LIB libspdk_init.a 00:03:26.318 SO libspdk_init.so.6.0 00:03:26.577 CC lib/blob/blob_bs_dev.o 00:03:26.577 SYMLINK libspdk_init.so 00:03:26.577 CC lib/virtio/virtio.o 00:03:26.577 CC lib/virtio/virtio_vhost_user.o 00:03:26.577 CC lib/virtio/virtio_vfio_user.o 00:03:26.577 CC lib/virtio/virtio_pci.o 00:03:26.577 CC lib/fsdev/fsdev.o 00:03:26.835 CC lib/event/app.o 00:03:26.835 CC lib/fsdev/fsdev_io.o 00:03:26.835 CC lib/fsdev/fsdev_rpc.o 00:03:26.835 LIB libspdk_nvme.a 00:03:27.093 CC lib/event/reactor.o 00:03:27.093 CC lib/event/log_rpc.o 00:03:27.093 LIB libspdk_virtio.a 00:03:27.093 LIB libspdk_accel.a 00:03:27.093 SO libspdk_accel.so.16.0 00:03:27.093 SO libspdk_virtio.so.7.0 00:03:27.093 CC lib/event/app_rpc.o 00:03:27.093 SO libspdk_nvme.so.15.0 00:03:27.093 SYMLINK libspdk_accel.so 00:03:27.093 SYMLINK libspdk_virtio.so 00:03:27.093 CC lib/event/scheduler_static.o 00:03:27.351 LIB libspdk_fsdev.a 00:03:27.351 CC lib/bdev/bdev.o 00:03:27.351 CC lib/bdev/bdev_rpc.o 00:03:27.351 CC lib/bdev/bdev_zone.o 00:03:27.351 CC lib/bdev/part.o 00:03:27.351 CC lib/bdev/scsi_nvme.o 00:03:27.351 SO libspdk_fsdev.so.2.0 00:03:27.351 SYMLINK libspdk_nvme.so 00:03:27.351 LIB libspdk_event.a 
00:03:27.609 SYMLINK libspdk_fsdev.so 00:03:27.609 SO libspdk_event.so.14.0 00:03:27.609 SYMLINK libspdk_event.so 00:03:27.609 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:28.544 LIB libspdk_fuse_dispatcher.a 00:03:28.544 SO libspdk_fuse_dispatcher.so.1.0 00:03:28.544 SYMLINK libspdk_fuse_dispatcher.so 00:03:29.111 LIB libspdk_blob.a 00:03:29.111 SO libspdk_blob.so.11.0 00:03:29.369 SYMLINK libspdk_blob.so 00:03:29.628 CC lib/lvol/lvol.o 00:03:29.628 CC lib/blobfs/blobfs.o 00:03:29.628 CC lib/blobfs/tree.o 00:03:30.195 LIB libspdk_bdev.a 00:03:30.195 SO libspdk_bdev.so.17.0 00:03:30.195 SYMLINK libspdk_bdev.so 00:03:30.453 LIB libspdk_blobfs.a 00:03:30.453 CC lib/nvmf/ctrlr.o 00:03:30.453 CC lib/nvmf/ctrlr_bdev.o 00:03:30.453 CC lib/nvmf/ctrlr_discovery.o 00:03:30.453 CC lib/nvmf/subsystem.o 00:03:30.453 CC lib/ublk/ublk.o 00:03:30.453 SO libspdk_blobfs.so.10.0 00:03:30.453 CC lib/nbd/nbd.o 00:03:30.453 CC lib/ftl/ftl_core.o 00:03:30.453 CC lib/scsi/dev.o 00:03:30.453 LIB libspdk_lvol.a 00:03:30.711 SYMLINK libspdk_blobfs.so 00:03:30.711 CC lib/nbd/nbd_rpc.o 00:03:30.711 SO libspdk_lvol.so.10.0 00:03:30.711 SYMLINK libspdk_lvol.so 00:03:30.711 CC lib/scsi/lun.o 00:03:30.711 CC lib/scsi/port.o 00:03:30.711 CC lib/scsi/scsi.o 00:03:30.969 CC lib/scsi/scsi_bdev.o 00:03:30.969 LIB libspdk_nbd.a 00:03:30.969 CC lib/nvmf/nvmf.o 00:03:30.969 CC lib/ftl/ftl_init.o 00:03:30.969 SO libspdk_nbd.so.7.0 00:03:30.969 CC lib/ftl/ftl_layout.o 00:03:30.969 SYMLINK libspdk_nbd.so 00:03:30.969 CC lib/nvmf/nvmf_rpc.o 00:03:30.969 CC lib/ftl/ftl_debug.o 00:03:31.227 CC lib/ublk/ublk_rpc.o 00:03:31.227 CC lib/scsi/scsi_pr.o 00:03:31.227 CC lib/ftl/ftl_io.o 00:03:31.227 CC lib/ftl/ftl_sb.o 00:03:31.485 LIB libspdk_ublk.a 00:03:31.485 CC lib/ftl/ftl_l2p.o 00:03:31.485 SO libspdk_ublk.so.3.0 00:03:31.485 CC lib/scsi/scsi_rpc.o 00:03:31.485 SYMLINK libspdk_ublk.so 00:03:31.485 CC lib/scsi/task.o 00:03:31.485 CC lib/ftl/ftl_l2p_flat.o 00:03:31.485 CC lib/ftl/ftl_nv_cache.o 00:03:31.485 CC 
lib/nvmf/transport.o 00:03:31.485 CC lib/ftl/ftl_band.o 00:03:31.744 CC lib/nvmf/tcp.o 00:03:31.744 LIB libspdk_scsi.a 00:03:31.744 CC lib/ftl/ftl_band_ops.o 00:03:31.744 SO libspdk_scsi.so.9.0 00:03:31.744 CC lib/ftl/ftl_writer.o 00:03:31.744 SYMLINK libspdk_scsi.so 00:03:31.744 CC lib/ftl/ftl_rq.o 00:03:32.002 CC lib/nvmf/stubs.o 00:03:32.002 CC lib/nvmf/mdns_server.o 00:03:32.002 CC lib/nvmf/rdma.o 00:03:32.002 CC lib/ftl/ftl_reloc.o 00:03:32.002 CC lib/nvmf/auth.o 00:03:32.260 CC lib/ftl/ftl_l2p_cache.o 00:03:32.518 CC lib/ftl/ftl_p2l.o 00:03:32.518 CC lib/ftl/ftl_p2l_log.o 00:03:32.518 CC lib/iscsi/conn.o 00:03:32.518 CC lib/vhost/vhost.o 00:03:32.518 CC lib/ftl/mngt/ftl_mngt.o 00:03:32.518 CC lib/vhost/vhost_rpc.o 00:03:32.776 CC lib/iscsi/init_grp.o 00:03:32.776 CC lib/iscsi/iscsi.o 00:03:32.776 CC lib/vhost/vhost_scsi.o 00:03:32.776 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:33.075 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:33.075 CC lib/iscsi/param.o 00:03:33.075 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:33.075 CC lib/vhost/vhost_blk.o 00:03:33.075 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:33.333 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:33.333 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:33.333 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:33.333 CC lib/iscsi/portal_grp.o 00:03:33.591 CC lib/vhost/rte_vhost_user.o 00:03:33.591 CC lib/iscsi/tgt_node.o 00:03:33.591 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:33.591 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:33.591 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:33.591 CC lib/iscsi/iscsi_subsystem.o 00:03:33.849 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:33.849 CC lib/iscsi/iscsi_rpc.o 00:03:33.849 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:33.849 CC lib/iscsi/task.o 00:03:33.849 CC lib/ftl/utils/ftl_conf.o 00:03:34.106 CC lib/ftl/utils/ftl_md.o 00:03:34.106 CC lib/ftl/utils/ftl_mempool.o 00:03:34.106 CC lib/ftl/utils/ftl_bitmap.o 00:03:34.106 LIB libspdk_nvmf.a 00:03:34.106 CC lib/ftl/utils/ftl_property.o 00:03:34.106 LIB libspdk_iscsi.a 00:03:34.106 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:34.106 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:34.363 SO libspdk_iscsi.so.8.0 00:03:34.363 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:34.363 SO libspdk_nvmf.so.20.0 00:03:34.363 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:34.363 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:34.363 SYMLINK libspdk_iscsi.so 00:03:34.363 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:34.363 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:34.363 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:34.363 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:34.363 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:34.363 SYMLINK libspdk_nvmf.so 00:03:34.363 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:34.621 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:34.621 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:34.621 CC lib/ftl/base/ftl_base_dev.o 00:03:34.621 LIB libspdk_vhost.a 00:03:34.621 CC lib/ftl/base/ftl_base_bdev.o 00:03:34.621 CC lib/ftl/ftl_trace.o 00:03:34.621 SO libspdk_vhost.so.8.0 00:03:34.880 SYMLINK libspdk_vhost.so 00:03:34.880 LIB libspdk_ftl.a 00:03:35.138 SO libspdk_ftl.so.9.0 00:03:35.705 SYMLINK libspdk_ftl.so 00:03:35.962 CC module/env_dpdk/env_dpdk_rpc.o 00:03:35.962 CC module/accel/error/accel_error.o 00:03:35.962 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:35.962 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:35.962 CC module/sock/posix/posix.o 00:03:35.962 CC module/fsdev/aio/fsdev_aio.o 00:03:35.963 CC module/keyring/file/keyring.o 00:03:35.963 CC module/scheduler/gscheduler/gscheduler.o 00:03:35.963 CC module/accel/ioat/accel_ioat.o 00:03:35.963 CC module/blob/bdev/blob_bdev.o 00:03:35.963 LIB libspdk_env_dpdk_rpc.a 00:03:35.963 SO libspdk_env_dpdk_rpc.so.6.0 00:03:35.963 SYMLINK libspdk_env_dpdk_rpc.so 00:03:36.240 LIB libspdk_scheduler_dpdk_governor.a 00:03:36.240 LIB libspdk_scheduler_gscheduler.a 00:03:36.240 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:36.240 SO libspdk_scheduler_gscheduler.so.4.0 00:03:36.240 LIB libspdk_scheduler_dynamic.a 
00:03:36.240 CC module/accel/ioat/accel_ioat_rpc.o 00:03:36.240 CC module/accel/error/accel_error_rpc.o 00:03:36.240 SO libspdk_scheduler_dynamic.so.4.0 00:03:36.240 CC module/keyring/file/keyring_rpc.o 00:03:36.240 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:36.240 SYMLINK libspdk_scheduler_gscheduler.so 00:03:36.240 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:36.240 CC module/fsdev/aio/linux_aio_mgr.o 00:03:36.240 LIB libspdk_blob_bdev.a 00:03:36.240 SYMLINK libspdk_scheduler_dynamic.so 00:03:36.240 SO libspdk_blob_bdev.so.11.0 00:03:36.240 CC module/keyring/linux/keyring.o 00:03:36.240 LIB libspdk_accel_ioat.a 00:03:36.240 LIB libspdk_accel_error.a 00:03:36.507 LIB libspdk_keyring_file.a 00:03:36.507 SYMLINK libspdk_blob_bdev.so 00:03:36.507 SO libspdk_accel_ioat.so.6.0 00:03:36.507 SO libspdk_accel_error.so.2.0 00:03:36.507 SO libspdk_keyring_file.so.2.0 00:03:36.507 SYMLINK libspdk_accel_ioat.so 00:03:36.507 SYMLINK libspdk_accel_error.so 00:03:36.507 CC module/keyring/linux/keyring_rpc.o 00:03:36.507 SYMLINK libspdk_keyring_file.so 00:03:36.507 CC module/accel/dsa/accel_dsa.o 00:03:36.507 CC module/accel/dsa/accel_dsa_rpc.o 00:03:36.507 LIB libspdk_keyring_linux.a 00:03:36.507 CC module/accel/iaa/accel_iaa.o 00:03:36.507 SO libspdk_keyring_linux.so.1.0 00:03:36.766 LIB libspdk_fsdev_aio.a 00:03:36.766 CC module/bdev/delay/vbdev_delay.o 00:03:36.766 CC module/bdev/gpt/gpt.o 00:03:36.766 CC module/bdev/error/vbdev_error.o 00:03:36.766 SYMLINK libspdk_keyring_linux.so 00:03:36.766 CC module/bdev/error/vbdev_error_rpc.o 00:03:36.766 LIB libspdk_sock_posix.a 00:03:36.766 CC module/blobfs/bdev/blobfs_bdev.o 00:03:36.766 SO libspdk_fsdev_aio.so.1.0 00:03:36.766 SO libspdk_sock_posix.so.6.0 00:03:36.766 SYMLINK libspdk_fsdev_aio.so 00:03:36.766 LIB libspdk_accel_dsa.a 00:03:36.766 CC module/bdev/lvol/vbdev_lvol.o 00:03:36.766 SYMLINK libspdk_sock_posix.so 00:03:36.766 CC module/accel/iaa/accel_iaa_rpc.o 00:03:36.766 CC module/blobfs/bdev/blobfs_bdev_rpc.o 
00:03:36.766 SO libspdk_accel_dsa.so.5.0 00:03:36.766 CC module/bdev/gpt/vbdev_gpt.o 00:03:36.766 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:36.766 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:37.024 SYMLINK libspdk_accel_dsa.so 00:03:37.024 CC module/bdev/malloc/bdev_malloc.o 00:03:37.024 LIB libspdk_accel_iaa.a 00:03:37.024 LIB libspdk_bdev_error.a 00:03:37.024 SO libspdk_accel_iaa.so.3.0 00:03:37.024 SO libspdk_bdev_error.so.6.0 00:03:37.024 LIB libspdk_blobfs_bdev.a 00:03:37.024 SO libspdk_blobfs_bdev.so.6.0 00:03:37.024 CC module/bdev/null/bdev_null.o 00:03:37.024 SYMLINK libspdk_accel_iaa.so 00:03:37.024 SYMLINK libspdk_bdev_error.so 00:03:37.024 LIB libspdk_bdev_delay.a 00:03:37.024 SYMLINK libspdk_blobfs_bdev.so 00:03:37.024 SO libspdk_bdev_delay.so.6.0 00:03:37.282 LIB libspdk_bdev_gpt.a 00:03:37.282 SO libspdk_bdev_gpt.so.6.0 00:03:37.282 SYMLINK libspdk_bdev_delay.so 00:03:37.282 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:37.282 CC module/bdev/raid/bdev_raid.o 00:03:37.282 CC module/bdev/nvme/bdev_nvme.o 00:03:37.282 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.282 SYMLINK libspdk_bdev_gpt.so 00:03:37.282 CC module/bdev/raid/bdev_raid_rpc.o 00:03:37.282 CC module/bdev/split/vbdev_split.o 00:03:37.282 CC module/bdev/null/bdev_null_rpc.o 00:03:37.282 CC module/bdev/split/vbdev_split_rpc.o 00:03:37.282 LIB libspdk_bdev_lvol.a 00:03:37.541 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:37.541 SO libspdk_bdev_lvol.so.6.0 00:03:37.541 LIB libspdk_bdev_malloc.a 00:03:37.541 SYMLINK libspdk_bdev_lvol.so 00:03:37.541 CC module/bdev/raid/bdev_raid_sb.o 00:03:37.541 SO libspdk_bdev_malloc.so.6.0 00:03:37.541 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:37.541 LIB libspdk_bdev_null.a 00:03:37.541 CC module/bdev/raid/raid0.o 00:03:37.541 SYMLINK libspdk_bdev_malloc.so 00:03:37.541 SO libspdk_bdev_null.so.6.0 00:03:37.541 LIB libspdk_bdev_split.a 00:03:37.541 CC module/bdev/raid/raid1.o 00:03:37.541 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:37.541 SO 
libspdk_bdev_split.so.6.0 00:03:37.799 SYMLINK libspdk_bdev_null.so 00:03:37.799 CC module/bdev/raid/concat.o 00:03:37.799 SYMLINK libspdk_bdev_split.so 00:03:37.799 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:37.799 LIB libspdk_bdev_passthru.a 00:03:37.799 SO libspdk_bdev_passthru.so.6.0 00:03:37.799 CC module/bdev/nvme/nvme_rpc.o 00:03:38.057 SYMLINK libspdk_bdev_passthru.so 00:03:38.057 CC module/bdev/aio/bdev_aio.o 00:03:38.057 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.057 LIB libspdk_bdev_zone_block.a 00:03:38.057 CC module/bdev/iscsi/bdev_iscsi.o 00:03:38.057 CC module/bdev/ftl/bdev_ftl.o 00:03:38.057 SO libspdk_bdev_zone_block.so.6.0 00:03:38.057 SYMLINK libspdk_bdev_zone_block.so 00:03:38.057 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.057 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:38.057 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:38.057 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:38.315 CC module/bdev/nvme/vbdev_opal.o 00:03:38.315 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:38.315 LIB libspdk_bdev_aio.a 00:03:38.315 SO libspdk_bdev_aio.so.6.0 00:03:38.315 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:38.315 LIB libspdk_bdev_raid.a 00:03:38.315 SYMLINK libspdk_bdev_aio.so 00:03:38.315 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:38.315 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:38.315 SO libspdk_bdev_raid.so.6.0 00:03:38.573 SYMLINK libspdk_bdev_raid.so 00:03:38.573 LIB libspdk_bdev_ftl.a 00:03:38.573 LIB libspdk_bdev_iscsi.a 00:03:38.573 SO libspdk_bdev_iscsi.so.6.0 00:03:38.573 SO libspdk_bdev_ftl.so.6.0 00:03:38.573 LIB libspdk_bdev_virtio.a 00:03:38.573 SYMLINK libspdk_bdev_iscsi.so 00:03:38.574 SYMLINK libspdk_bdev_ftl.so 00:03:38.574 SO libspdk_bdev_virtio.so.6.0 00:03:38.832 SYMLINK libspdk_bdev_virtio.so 00:03:40.210 LIB libspdk_bdev_nvme.a 00:03:40.210 SO libspdk_bdev_nvme.so.7.1 00:03:40.210 SYMLINK libspdk_bdev_nvme.so 00:03:40.776 CC module/event/subsystems/keyring/keyring.o 00:03:40.776 CC 
module/event/subsystems/iobuf/iobuf.o 00:03:40.776 CC module/event/subsystems/vmd/vmd.o 00:03:40.776 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:40.776 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:40.776 CC module/event/subsystems/scheduler/scheduler.o 00:03:40.776 CC module/event/subsystems/sock/sock.o 00:03:40.776 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:40.776 CC module/event/subsystems/fsdev/fsdev.o 00:03:40.776 LIB libspdk_event_sock.a 00:03:40.776 LIB libspdk_event_keyring.a 00:03:40.776 LIB libspdk_event_vhost_blk.a 00:03:40.776 LIB libspdk_event_scheduler.a 00:03:40.776 LIB libspdk_event_vmd.a 00:03:40.776 LIB libspdk_event_iobuf.a 00:03:40.776 SO libspdk_event_keyring.so.1.0 00:03:40.776 SO libspdk_event_sock.so.5.0 00:03:40.776 LIB libspdk_event_fsdev.a 00:03:40.776 SO libspdk_event_vhost_blk.so.3.0 00:03:40.776 SO libspdk_event_scheduler.so.4.0 00:03:40.776 SO libspdk_event_vmd.so.6.0 00:03:40.776 SO libspdk_event_fsdev.so.1.0 00:03:40.776 SO libspdk_event_iobuf.so.3.0 00:03:40.776 SYMLINK libspdk_event_keyring.so 00:03:40.776 SYMLINK libspdk_event_sock.so 00:03:40.776 SYMLINK libspdk_event_vhost_blk.so 00:03:41.034 SYMLINK libspdk_event_scheduler.so 00:03:41.034 SYMLINK libspdk_event_vmd.so 00:03:41.034 SYMLINK libspdk_event_fsdev.so 00:03:41.034 SYMLINK libspdk_event_iobuf.so 00:03:41.292 CC module/event/subsystems/accel/accel.o 00:03:41.292 LIB libspdk_event_accel.a 00:03:41.292 SO libspdk_event_accel.so.6.0 00:03:41.552 SYMLINK libspdk_event_accel.so 00:03:41.811 CC module/event/subsystems/bdev/bdev.o 00:03:42.070 LIB libspdk_event_bdev.a 00:03:42.070 SO libspdk_event_bdev.so.6.0 00:03:42.070 SYMLINK libspdk_event_bdev.so 00:03:42.328 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.328 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.328 CC module/event/subsystems/scsi/scsi.o 00:03:42.328 CC module/event/subsystems/ublk/ublk.o 00:03:42.328 CC module/event/subsystems/nbd/nbd.o 00:03:42.328 LIB libspdk_event_ublk.a 
00:03:42.328 LIB libspdk_event_nbd.a 00:03:42.587 SO libspdk_event_ublk.so.3.0 00:03:42.587 SO libspdk_event_nbd.so.6.0 00:03:42.587 LIB libspdk_event_scsi.a 00:03:42.587 SO libspdk_event_scsi.so.6.0 00:03:42.587 SYMLINK libspdk_event_ublk.so 00:03:42.587 LIB libspdk_event_nvmf.a 00:03:42.587 SYMLINK libspdk_event_nbd.so 00:03:42.587 SO libspdk_event_nvmf.so.6.0 00:03:42.587 SYMLINK libspdk_event_scsi.so 00:03:42.587 SYMLINK libspdk_event_nvmf.so 00:03:42.846 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:42.846 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.106 LIB libspdk_event_vhost_scsi.a 00:03:43.106 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.106 LIB libspdk_event_iscsi.a 00:03:43.106 SO libspdk_event_iscsi.so.6.0 00:03:43.106 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.106 SYMLINK libspdk_event_iscsi.so 00:03:43.364 SO libspdk.so.6.0 00:03:43.365 SYMLINK libspdk.so 00:03:43.624 CC app/trace_record/trace_record.o 00:03:43.624 CXX app/trace/trace.o 00:03:43.624 TEST_HEADER include/spdk/accel.h 00:03:43.624 TEST_HEADER include/spdk/accel_module.h 00:03:43.624 TEST_HEADER include/spdk/assert.h 00:03:43.624 TEST_HEADER include/spdk/barrier.h 00:03:43.624 TEST_HEADER include/spdk/base64.h 00:03:43.624 TEST_HEADER include/spdk/bdev.h 00:03:43.624 TEST_HEADER include/spdk/bdev_module.h 00:03:43.624 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.624 TEST_HEADER include/spdk/bit_array.h 00:03:43.624 TEST_HEADER include/spdk/bit_pool.h 00:03:43.624 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.624 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.624 TEST_HEADER include/spdk/blobfs.h 00:03:43.624 TEST_HEADER include/spdk/blob.h 00:03:43.624 TEST_HEADER include/spdk/conf.h 00:03:43.624 TEST_HEADER include/spdk/config.h 00:03:43.624 TEST_HEADER include/spdk/cpuset.h 00:03:43.624 TEST_HEADER include/spdk/crc16.h 00:03:43.624 TEST_HEADER include/spdk/crc32.h 00:03:43.624 TEST_HEADER include/spdk/crc64.h 00:03:43.624 TEST_HEADER include/spdk/dif.h 00:03:43.624 
TEST_HEADER include/spdk/dma.h 00:03:43.624 TEST_HEADER include/spdk/endian.h 00:03:43.624 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.624 TEST_HEADER include/spdk/env.h 00:03:43.624 TEST_HEADER include/spdk/event.h 00:03:43.624 TEST_HEADER include/spdk/fd_group.h 00:03:43.624 TEST_HEADER include/spdk/fd.h 00:03:43.624 TEST_HEADER include/spdk/file.h 00:03:43.624 TEST_HEADER include/spdk/fsdev.h 00:03:43.624 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.624 TEST_HEADER include/spdk/ftl.h 00:03:43.624 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.624 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.624 CC examples/ioat/perf/perf.o 00:03:43.624 TEST_HEADER include/spdk/hexlify.h 00:03:43.624 TEST_HEADER include/spdk/histogram_data.h 00:03:43.624 TEST_HEADER include/spdk/idxd.h 00:03:43.624 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.624 TEST_HEADER include/spdk/init.h 00:03:43.624 TEST_HEADER include/spdk/ioat.h 00:03:43.624 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.624 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.624 CC examples/util/zipf/zipf.o 00:03:43.624 TEST_HEADER include/spdk/json.h 00:03:43.624 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.624 TEST_HEADER include/spdk/keyring.h 00:03:43.624 TEST_HEADER include/spdk/keyring_module.h 00:03:43.624 TEST_HEADER include/spdk/likely.h 00:03:43.624 CC test/thread/poller_perf/poller_perf.o 00:03:43.624 TEST_HEADER include/spdk/log.h 00:03:43.624 TEST_HEADER include/spdk/lvol.h 00:03:43.624 TEST_HEADER include/spdk/md5.h 00:03:43.624 CC test/dma/test_dma/test_dma.o 00:03:43.624 TEST_HEADER include/spdk/memory.h 00:03:43.624 TEST_HEADER include/spdk/mmio.h 00:03:43.624 TEST_HEADER include/spdk/nbd.h 00:03:43.624 TEST_HEADER include/spdk/net.h 00:03:43.624 TEST_HEADER include/spdk/notify.h 00:03:43.624 TEST_HEADER include/spdk/nvme.h 00:03:43.624 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.624 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.624 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.624 
TEST_HEADER include/spdk/nvme_spec.h 00:03:43.624 CC test/app/bdev_svc/bdev_svc.o 00:03:43.624 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.624 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.624 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.624 TEST_HEADER include/spdk/nvmf.h 00:03:43.883 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.883 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.883 TEST_HEADER include/spdk/opal.h 00:03:43.883 TEST_HEADER include/spdk/opal_spec.h 00:03:43.883 TEST_HEADER include/spdk/pci_ids.h 00:03:43.883 TEST_HEADER include/spdk/pipe.h 00:03:43.883 TEST_HEADER include/spdk/queue.h 00:03:43.883 TEST_HEADER include/spdk/reduce.h 00:03:43.883 TEST_HEADER include/spdk/rpc.h 00:03:43.883 TEST_HEADER include/spdk/scheduler.h 00:03:43.883 TEST_HEADER include/spdk/scsi.h 00:03:43.883 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.883 TEST_HEADER include/spdk/scsi_spec.h 00:03:43.883 TEST_HEADER include/spdk/sock.h 00:03:43.883 TEST_HEADER include/spdk/stdinc.h 00:03:43.883 TEST_HEADER include/spdk/string.h 00:03:43.883 TEST_HEADER include/spdk/thread.h 00:03:43.883 TEST_HEADER include/spdk/trace.h 00:03:43.883 TEST_HEADER include/spdk/trace_parser.h 00:03:43.883 TEST_HEADER include/spdk/tree.h 00:03:43.883 TEST_HEADER include/spdk/ublk.h 00:03:43.883 TEST_HEADER include/spdk/util.h 00:03:43.883 TEST_HEADER include/spdk/uuid.h 00:03:43.884 TEST_HEADER include/spdk/version.h 00:03:43.884 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:43.884 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.884 TEST_HEADER include/spdk/vhost.h 00:03:43.884 TEST_HEADER include/spdk/vmd.h 00:03:43.884 TEST_HEADER include/spdk/xor.h 00:03:43.884 TEST_HEADER include/spdk/zipf.h 00:03:43.884 CXX test/cpp_headers/accel.o 00:03:43.884 LINK spdk_trace_record 00:03:43.884 LINK poller_perf 00:03:43.884 LINK zipf 00:03:43.884 LINK ioat_perf 00:03:43.884 LINK bdev_svc 00:03:44.142 CXX test/cpp_headers/accel_module.o 00:03:44.142 LINK spdk_trace 00:03:44.142 CC 
test/rpc_client/rpc_client_test.o 00:03:44.142 CC examples/ioat/verify/verify.o 00:03:44.142 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.142 LINK test_dma 00:03:44.142 CC test/event/event_perf/event_perf.o 00:03:44.142 CXX test/cpp_headers/assert.o 00:03:44.402 CC test/event/reactor/reactor.o 00:03:44.402 LINK rpc_client_test 00:03:44.402 CC app/nvmf_tgt/nvmf_main.o 00:03:44.402 LINK event_perf 00:03:44.402 LINK verify 00:03:44.402 CXX test/cpp_headers/barrier.o 00:03:44.402 LINK reactor 00:03:44.402 LINK mem_callbacks 00:03:44.661 CC test/event/reactor_perf/reactor_perf.o 00:03:44.661 LINK nvmf_tgt 00:03:44.661 CC test/app/histogram_perf/histogram_perf.o 00:03:44.661 CXX test/cpp_headers/base64.o 00:03:44.661 LINK nvme_fuzz 00:03:44.661 CC test/app/jsoncat/jsoncat.o 00:03:44.662 CC test/app/stub/stub.o 00:03:44.662 CC test/event/app_repeat/app_repeat.o 00:03:44.662 CC test/env/vtophys/vtophys.o 00:03:44.662 LINK reactor_perf 00:03:44.922 LINK histogram_perf 00:03:44.922 CXX test/cpp_headers/bdev.o 00:03:44.922 LINK jsoncat 00:03:44.922 LINK stub 00:03:44.922 LINK vtophys 00:03:44.922 LINK app_repeat 00:03:45.182 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.182 CC app/iscsi_tgt/iscsi_tgt.o 00:03:45.182 CXX test/cpp_headers/bdev_module.o 00:03:45.182 CXX test/cpp_headers/bdev_zone.o 00:03:45.182 CC test/event/scheduler/scheduler.o 00:03:45.182 CC test/accel/dif/dif.o 00:03:45.182 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:45.182 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.523 CC test/blobfs/mkfs/mkfs.o 00:03:45.523 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.523 LINK iscsi_tgt 00:03:45.523 CXX test/cpp_headers/bit_array.o 00:03:45.523 LINK env_dpdk_post_init 00:03:45.523 LINK scheduler 00:03:45.523 CXX test/cpp_headers/bit_pool.o 00:03:45.817 LINK mkfs 00:03:45.817 CC test/lvol/esnap/esnap.o 00:03:45.817 CC test/env/memory/memory_ut.o 00:03:45.817 CXX test/cpp_headers/blob_bdev.o 00:03:45.817 LINK vhost_fuzz 00:03:45.817 CC 
app/spdk_lspci/spdk_lspci.o 00:03:45.817 CC app/spdk_tgt/spdk_tgt.o 00:03:46.078 CC test/nvme/aer/aer.o 00:03:46.078 LINK dif 00:03:46.078 LINK spdk_lspci 00:03:46.078 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.078 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.078 LINK spdk_tgt 00:03:46.337 CXX test/cpp_headers/blobfs.o 00:03:46.337 CC test/env/pci/pci_ut.o 00:03:46.337 LINK aer 00:03:46.337 CC app/spdk_nvme_perf/perf.o 00:03:46.337 LINK interrupt_tgt 00:03:46.337 CXX test/cpp_headers/blob.o 00:03:46.596 CXX test/cpp_headers/conf.o 00:03:46.596 CC test/nvme/reset/reset.o 00:03:46.596 CC app/spdk_nvme_identify/identify.o 00:03:46.596 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.854 CXX test/cpp_headers/config.o 00:03:46.854 CXX test/cpp_headers/cpuset.o 00:03:46.854 LINK spdk_nvme_discover 00:03:46.854 LINK pci_ut 00:03:46.854 LINK reset 00:03:47.112 LINK iscsi_fuzz 00:03:47.112 CXX test/cpp_headers/crc16.o 00:03:47.112 LINK memory_ut 00:03:47.112 CC app/spdk_top/spdk_top.o 00:03:47.112 CC test/nvme/sgl/sgl.o 00:03:47.371 CC test/nvme/e2edp/nvme_dp.o 00:03:47.371 CXX test/cpp_headers/crc32.o 00:03:47.371 CXX test/cpp_headers/crc64.o 00:03:47.371 LINK spdk_nvme_perf 00:03:47.371 CXX test/cpp_headers/dif.o 00:03:47.371 CC test/nvme/overhead/overhead.o 00:03:47.629 LINK sgl 00:03:47.629 LINK spdk_nvme_identify 00:03:47.629 CXX test/cpp_headers/dma.o 00:03:47.629 LINK nvme_dp 00:03:47.629 CXX test/cpp_headers/endian.o 00:03:47.629 CC test/bdev/bdevio/bdevio.o 00:03:47.629 CXX test/cpp_headers/env_dpdk.o 00:03:47.629 CXX test/cpp_headers/env.o 00:03:47.888 LINK overhead 00:03:47.888 CC test/nvme/err_injection/err_injection.o 00:03:47.888 CC examples/thread/thread/thread_ex.o 00:03:48.147 CXX test/cpp_headers/event.o 00:03:48.147 CC test/nvme/startup/startup.o 00:03:48.405 CC app/vhost/vhost.o 00:03:48.405 LINK bdevio 00:03:48.405 LINK err_injection 00:03:48.405 CC examples/sock/hello_world/hello_sock.o 00:03:48.405 LINK spdk_top 00:03:48.405 CXX 
test/cpp_headers/fd_group.o 00:03:48.405 LINK thread 00:03:48.405 LINK startup 00:03:48.664 LINK vhost 00:03:48.664 CC test/nvme/reserve/reserve.o 00:03:48.664 LINK hello_sock 00:03:48.664 CXX test/cpp_headers/fd.o 00:03:48.923 CC app/spdk_dd/spdk_dd.o 00:03:48.923 CC test/nvme/simple_copy/simple_copy.o 00:03:48.923 CC app/fio/nvme/fio_plugin.o 00:03:48.923 CXX test/cpp_headers/file.o 00:03:48.923 LINK reserve 00:03:48.923 CC examples/vmd/lsvmd/lsvmd.o 00:03:49.182 CC examples/idxd/perf/perf.o 00:03:49.182 CXX test/cpp_headers/fsdev.o 00:03:49.182 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:49.182 LINK lsvmd 00:03:49.182 LINK simple_copy 00:03:49.439 CXX test/cpp_headers/fsdev_module.o 00:03:49.439 LINK spdk_dd 00:03:49.439 CC examples/accel/perf/accel_perf.o 00:03:49.439 LINK idxd_perf 00:03:49.439 LINK spdk_nvme 00:03:49.439 LINK hello_fsdev 00:03:49.698 CC examples/vmd/led/led.o 00:03:49.698 CC test/nvme/connect_stress/connect_stress.o 00:03:49.698 CXX test/cpp_headers/ftl.o 00:03:49.698 CC test/nvme/boot_partition/boot_partition.o 00:03:49.698 LINK led 00:03:49.698 CC app/fio/bdev/fio_plugin.o 00:03:49.698 CXX test/cpp_headers/fuse_dispatcher.o 00:03:49.698 LINK connect_stress 00:03:49.956 LINK boot_partition 00:03:49.956 CC examples/blob/hello_world/hello_blob.o 00:03:49.956 CC examples/nvme/hello_world/hello_world.o 00:03:49.956 CXX test/cpp_headers/gpt_spec.o 00:03:49.956 LINK accel_perf 00:03:49.956 CC test/nvme/compliance/nvme_compliance.o 00:03:49.956 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.214 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:50.214 LINK hello_blob 00:03:50.214 CXX test/cpp_headers/hexlify.o 00:03:50.214 LINK hello_world 00:03:50.214 LINK fused_ordering 00:03:50.214 LINK spdk_bdev 00:03:50.473 LINK doorbell_aers 00:03:50.473 LINK nvme_compliance 00:03:50.473 CXX test/cpp_headers/histogram_data.o 00:03:50.473 CXX test/cpp_headers/idxd.o 00:03:50.473 CC examples/blob/cli/blobcli.o 00:03:50.473 CXX 
test/cpp_headers/idxd_spec.o 00:03:50.473 CC examples/nvme/reconnect/reconnect.o 00:03:50.473 CXX test/cpp_headers/init.o 00:03:50.473 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:50.731 CXX test/cpp_headers/ioat.o 00:03:50.731 CXX test/cpp_headers/ioat_spec.o 00:03:50.731 CC test/nvme/fdp/fdp.o 00:03:50.731 CXX test/cpp_headers/iscsi_spec.o 00:03:50.990 LINK reconnect 00:03:50.990 CC examples/bdev/bdevperf/bdevperf.o 00:03:50.990 LINK blobcli 00:03:50.990 CC examples/bdev/hello_world/hello_bdev.o 00:03:50.990 CC examples/nvme/arbitration/arbitration.o 00:03:50.990 CXX test/cpp_headers/json.o 00:03:50.990 LINK nvme_manage 00:03:50.990 CXX test/cpp_headers/jsonrpc.o 00:03:51.248 CXX test/cpp_headers/keyring.o 00:03:51.248 LINK fdp 00:03:51.248 LINK hello_bdev 00:03:51.248 CXX test/cpp_headers/keyring_module.o 00:03:51.248 CC examples/nvme/hotplug/hotplug.o 00:03:51.248 LINK arbitration 00:03:51.506 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:51.506 CC test/nvme/cuse/cuse.o 00:03:51.506 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:51.506 CC examples/nvme/abort/abort.o 00:03:51.506 CXX test/cpp_headers/likely.o 00:03:51.506 CXX test/cpp_headers/log.o 00:03:51.506 CXX test/cpp_headers/lvol.o 00:03:51.506 LINK esnap 00:03:51.506 LINK cmb_copy 00:03:51.506 LINK hotplug 00:03:51.506 LINK pmr_persistence 00:03:51.506 CXX test/cpp_headers/md5.o 00:03:51.763 CXX test/cpp_headers/memory.o 00:03:51.763 CXX test/cpp_headers/mmio.o 00:03:51.763 LINK bdevperf 00:03:51.763 CXX test/cpp_headers/nbd.o 00:03:51.763 CXX test/cpp_headers/net.o 00:03:51.763 CXX test/cpp_headers/notify.o 00:03:51.763 CXX test/cpp_headers/nvme.o 00:03:51.763 LINK abort 00:03:51.763 CXX test/cpp_headers/nvme_intel.o 00:03:52.021 CXX test/cpp_headers/nvme_ocssd.o 00:03:52.021 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.021 CXX test/cpp_headers/nvme_spec.o 00:03:52.021 CXX test/cpp_headers/nvme_zns.o 00:03:52.021 CXX test/cpp_headers/nvmf_cmd.o 00:03:52.021 CXX 
test/cpp_headers/nvmf_fc_spec.o 00:03:52.021 CXX test/cpp_headers/nvmf.o 00:03:52.021 CXX test/cpp_headers/nvmf_spec.o 00:03:52.021 CXX test/cpp_headers/nvmf_transport.o 00:03:52.021 CXX test/cpp_headers/opal.o 00:03:52.279 CXX test/cpp_headers/opal_spec.o 00:03:52.279 CXX test/cpp_headers/pci_ids.o 00:03:52.279 CC examples/nvmf/nvmf/nvmf.o 00:03:52.279 CXX test/cpp_headers/pipe.o 00:03:52.279 CXX test/cpp_headers/queue.o 00:03:52.279 CXX test/cpp_headers/reduce.o 00:03:52.279 CXX test/cpp_headers/rpc.o 00:03:52.279 CXX test/cpp_headers/scheduler.o 00:03:52.279 CXX test/cpp_headers/scsi.o 00:03:52.279 CXX test/cpp_headers/scsi_spec.o 00:03:52.279 CXX test/cpp_headers/sock.o 00:03:52.279 CXX test/cpp_headers/stdinc.o 00:03:52.537 CXX test/cpp_headers/string.o 00:03:52.537 CXX test/cpp_headers/thread.o 00:03:52.537 CXX test/cpp_headers/trace.o 00:03:52.537 CXX test/cpp_headers/trace_parser.o 00:03:52.537 CXX test/cpp_headers/tree.o 00:03:52.537 CXX test/cpp_headers/ublk.o 00:03:52.537 CXX test/cpp_headers/util.o 00:03:52.537 CXX test/cpp_headers/uuid.o 00:03:52.537 LINK nvmf 00:03:52.537 CXX test/cpp_headers/version.o 00:03:52.537 CXX test/cpp_headers/vfio_user_pci.o 00:03:52.537 CXX test/cpp_headers/vfio_user_spec.o 00:03:52.537 CXX test/cpp_headers/vhost.o 00:03:52.537 CXX test/cpp_headers/vmd.o 00:03:52.837 CXX test/cpp_headers/xor.o 00:03:52.837 CXX test/cpp_headers/zipf.o 00:03:52.837 LINK cuse 00:03:53.116 00:03:53.116 real 1m35.512s 00:03:53.116 user 8m50.418s 00:03:53.116 sys 1m47.942s 00:03:53.116 08:57:31 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:53.116 08:57:31 make -- common/autotest_common.sh@10 -- $ set +x 00:03:53.116 ************************************ 00:03:53.116 END TEST make 00:03:53.116 ************************************ 00:03:53.116 08:57:31 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:53.116 08:57:31 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:53.116 08:57:31 -- pm/common@40 -- $ local monitor 
pid pids signal=TERM 00:03:53.116 08:57:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.116 08:57:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:53.116 08:57:31 -- pm/common@44 -- $ pid=5413 00:03:53.116 08:57:31 -- pm/common@50 -- $ kill -TERM 5413 00:03:53.116 08:57:31 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.116 08:57:31 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:53.116 08:57:31 -- pm/common@44 -- $ pid=5415 00:03:53.116 08:57:31 -- pm/common@50 -- $ kill -TERM 5415 00:03:53.116 08:57:31 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:53.116 08:57:31 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:53.116 08:57:31 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:53.116 08:57:31 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:53.116 08:57:31 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:53.116 08:57:31 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:53.116 08:57:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.116 08:57:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.116 08:57:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.116 08:57:31 -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.116 08:57:31 -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.116 08:57:31 -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.116 08:57:31 -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.116 08:57:31 -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.117 08:57:31 -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.117 08:57:31 -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.117 08:57:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.117 08:57:31 -- scripts/common.sh@344 -- # case "$op" in 
00:03:53.117 08:57:31 -- scripts/common.sh@345 -- # : 1 00:03:53.117 08:57:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.117 08:57:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:53.117 08:57:31 -- scripts/common.sh@365 -- # decimal 1 00:03:53.117 08:57:31 -- scripts/common.sh@353 -- # local d=1 00:03:53.117 08:57:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.117 08:57:31 -- scripts/common.sh@355 -- # echo 1 00:03:53.117 08:57:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.117 08:57:31 -- scripts/common.sh@366 -- # decimal 2 00:03:53.117 08:57:31 -- scripts/common.sh@353 -- # local d=2 00:03:53.117 08:57:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.117 08:57:31 -- scripts/common.sh@355 -- # echo 2 00:03:53.117 08:57:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.117 08:57:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.117 08:57:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.117 08:57:31 -- scripts/common.sh@368 -- # return 0 00:03:53.117 08:57:31 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.117 08:57:31 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:53.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.117 --rc genhtml_branch_coverage=1 00:03:53.117 --rc genhtml_function_coverage=1 00:03:53.117 --rc genhtml_legend=1 00:03:53.117 --rc geninfo_all_blocks=1 00:03:53.117 --rc geninfo_unexecuted_blocks=1 00:03:53.117 00:03:53.117 ' 00:03:53.117 08:57:31 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:53.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.117 --rc genhtml_branch_coverage=1 00:03:53.117 --rc genhtml_function_coverage=1 00:03:53.117 --rc genhtml_legend=1 00:03:53.117 --rc geninfo_all_blocks=1 00:03:53.117 --rc geninfo_unexecuted_blocks=1 00:03:53.117 00:03:53.117 ' 00:03:53.117 08:57:31 -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:53.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.117 --rc genhtml_branch_coverage=1 00:03:53.117 --rc genhtml_function_coverage=1 00:03:53.117 --rc genhtml_legend=1 00:03:53.117 --rc geninfo_all_blocks=1 00:03:53.117 --rc geninfo_unexecuted_blocks=1 00:03:53.117 00:03:53.117 ' 00:03:53.117 08:57:31 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:53.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.117 --rc genhtml_branch_coverage=1 00:03:53.117 --rc genhtml_function_coverage=1 00:03:53.117 --rc genhtml_legend=1 00:03:53.117 --rc geninfo_all_blocks=1 00:03:53.117 --rc geninfo_unexecuted_blocks=1 00:03:53.117 00:03:53.117 ' 00:03:53.117 08:57:31 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:53.117 08:57:31 -- nvmf/common.sh@7 -- # uname -s 00:03:53.117 08:57:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.117 08:57:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.117 08:57:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.117 08:57:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.117 08:57:31 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.117 08:57:31 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:53.117 08:57:31 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.117 08:57:31 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:53.117 08:57:32 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:03:53.117 08:57:32 -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:03:53.117 08:57:32 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.117 08:57:32 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:53.117 08:57:32 -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:03:53.117 08:57:32 -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:53.117 08:57:32 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:53.117 08:57:32 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:53.117 08:57:32 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.117 08:57:32 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.117 08:57:32 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.117 08:57:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.117 08:57:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.117 08:57:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.117 08:57:32 -- paths/export.sh@5 -- # export PATH 00:03:53.117 08:57:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.117 08:57:32 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:03:53.117 08:57:32 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:53.117 08:57:32 -- 
nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:53.117 08:57:32 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:53.117 08:57:32 -- nvmf/common.sh@50 -- # : 0 00:03:53.117 08:57:32 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:53.117 08:57:32 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:53.117 08:57:32 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:53.117 08:57:32 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.117 08:57:32 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.117 08:57:32 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:53.117 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:53.117 08:57:32 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:53.117 08:57:32 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:53.117 08:57:32 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:53.117 08:57:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:53.117 08:57:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:53.117 08:57:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:53.117 08:57:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:53.117 08:57:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.117 08:57:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:53.117 08:57:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:53.117 08:57:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:53.376 08:57:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:53.376 08:57:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:53.376 08:57:32 -- spdk/autotest.sh@48 -- # udevadm_pid=56297 00:03:53.376 08:57:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:53.376 08:57:32 -- pm/common@17 -- # local monitor 00:03:53.376 08:57:32 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.376 08:57:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:53.376 08:57:32 -- pm/common@25 -- # sleep 1 00:03:53.376 08:57:32 -- pm/common@21 -- # date +%s 00:03:53.376 08:57:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:53.376 08:57:32 -- pm/common@21 -- # date +%s 00:03:53.376 08:57:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093052 00:03:53.376 08:57:32 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093052 00:03:53.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093052_collect-vmstat.pm.log 00:03:53.376 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093052_collect-cpu-load.pm.log 00:03:54.311 08:57:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:54.311 08:57:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:54.311 08:57:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.311 08:57:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.311 08:57:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:54.311 08:57:33 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:54.311 08:57:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.311 08:57:33 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:54.311 08:57:33 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:54.311 08:57:33 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:54.311 08:57:33 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:54.311 08:57:33 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 
00:03:54.311 08:57:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:54.311 08:57:33 -- common/autotest_common.sh@1457 -- # uname 00:03:54.311 08:57:33 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:54.311 08:57:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:54.311 08:57:33 -- common/autotest_common.sh@1477 -- # uname 00:03:54.311 08:57:33 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:54.311 08:57:33 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:54.311 08:57:33 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:54.311 lcov: LCOV version 1.15 00:03:54.311 08:57:33 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:12.424 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:12.424 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:30.544 08:58:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:30.544 08:58:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.544 08:58:07 -- common/autotest_common.sh@10 -- # set +x 00:04:30.544 08:58:07 -- spdk/autotest.sh@78 -- # rm -f 00:04:30.544 08:58:07 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.544 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:30.544 
0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:30.544 08:58:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:30.544 08:58:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:30.544 08:58:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:30.544 08:58:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:30.544 08:58:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.544 08:58:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:30.544 08:58:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:30.544 08:58:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.544 08:58:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:30.544 08:58:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:30.544 08:58:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.544 08:58:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:30.544 08:58:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:30.544 08:58:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:30.544 08:58:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:30.544 08:58:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:30.544 08:58:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:30.544 08:58:08 -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1n3/queue/zoned ]]
00:04:30.544 08:58:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:04:30.544 08:58:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:04:30.544 08:58:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.544 08:58:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.544 08:58:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:04:30.544 08:58:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:04:30.544 08:58:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:30.544 No valid GPT data, bailing
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # pt=
00:04:30.544 08:58:08 -- scripts/common.sh@395 -- # return 1
00:04:30.544 08:58:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:30.544 1+0 records in
00:04:30.544 1+0 records out
00:04:30.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379763 s, 276 MB/s
00:04:30.544 08:58:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.544 08:58:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.544 08:58:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1
00:04:30.544 08:58:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt
00:04:30.544 08:58:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1
00:04:30.544 No valid GPT data, bailing
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # pt=
00:04:30.544 08:58:08 -- scripts/common.sh@395 -- # return 1
00:04:30.544 08:58:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1
00:04:30.544 1+0 records in
00:04:30.544 1+0 records out
00:04:30.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491603 s, 213 MB/s
00:04:30.544 08:58:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.544 08:58:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.544 08:58:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2
00:04:30.544 08:58:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt
00:04:30.544 08:58:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2
00:04:30.544 No valid GPT data, bailing
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # pt=
00:04:30.544 08:58:08 -- scripts/common.sh@395 -- # return 1
00:04:30.544 08:58:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1
00:04:30.544 1+0 records in
00:04:30.544 1+0 records out
00:04:30.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498835 s, 210 MB/s
00:04:30.544 08:58:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:04:30.544 08:58:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:04:30.544 08:58:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3
00:04:30.544 08:58:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt
00:04:30.544 08:58:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3
00:04:30.544 No valid GPT data, bailing
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3
00:04:30.544 08:58:08 -- scripts/common.sh@394 -- # pt=
00:04:30.544 08:58:08 -- scripts/common.sh@395 -- # return 1
00:04:30.544 08:58:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1
00:04:30.544 1+0 records in
00:04:30.544 1+0 records out
00:04:30.544 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477715 s, 219 MB/s
00:04:30.544 08:58:08 -- spdk/autotest.sh@105 -- # sync
00:04:30.544 08:58:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:30.544 08:58:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:30.544 08:58:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:31.919 08:58:10 -- spdk/autotest.sh@111 -- # uname -s
00:04:31.919 08:58:10 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:04:31.919 08:58:10 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:04:31.919 08:58:10 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:32.489 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.489 Hugepages
00:04:32.489 node hugesize free / total
00:04:32.489 node0 1048576kB 0 / 0
00:04:32.489 node0 2048kB 0 / 0
00:04:32.489
00:04:32.748 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:32.748 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:32.748 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:32.748 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:04:32.748 08:58:11 -- spdk/autotest.sh@117 -- # uname -s
00:04:32.748 08:58:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:32.748 08:58:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:32.748 08:58:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.315 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:33.573 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:33.573 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:33.573 08:58:12 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:34.946 08:58:13 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:34.946 08:58:13 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:34.946 08:58:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:34.946 08:58:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:34.946 08:58:13 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:34.946 08:58:13 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:34.947 08:58:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:34.947 08:58:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:34.947 08:58:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:34.947 08:58:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:34.947 08:58:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:34.947 08:58:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:34.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:34.947 Waiting for block devices as requested
00:04:35.205 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:35.205 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:35.205 08:58:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:35.205 08:58:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]]
00:04:35.205 08:58:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]]
00:04:35.205 08:58:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:35.205 08:58:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:35.205 08:58:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:35.205 08:58:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:35.205 08:58:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:35.205 08:58:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:35.205 08:58:14 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:35.205 08:58:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:35.205 08:58:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:35.205 08:58:14 -- common/autotest_common.sh@1543 -- # continue
00:04:35.205 08:58:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:35.205 08:58:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme
00:04:35.205 08:58:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:35.205 08:58:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:04:35.206 08:58:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:04:35.206 08:58:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:35.206 08:58:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:35.206 08:58:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:35.206 08:58:14 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:35.206 08:58:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:35.206 08:58:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:35.206 08:58:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:04:35.206 08:58:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:35.206 08:58:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:35.206 08:58:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:35.206 08:58:14 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:35.206 08:58:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:35.206 08:58:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:35.206 08:58:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:35.206 08:58:14 -- common/autotest_common.sh@1543 -- # continue
00:04:35.206 08:58:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:35.206 08:58:14 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:35.206 08:58:14 -- common/autotest_common.sh@10 -- # set +x
00:04:35.464 08:58:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:35.464 08:58:14 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:35.464 08:58:14 -- common/autotest_common.sh@10 -- # set +x
00:04:35.464 08:58:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:36.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:36.030 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:36.289 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:36.289 08:58:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:36.289 08:58:15 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:36.289 08:58:15 -- common/autotest_common.sh@10 -- # set +x
00:04:36.289 08:58:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:36.289 08:58:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:36.289 08:58:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:36.289 08:58:15 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:36.289 08:58:15 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:36.289 08:58:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:36.289 08:58:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:36.289 08:58:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:36.289 08:58:15 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:36.289 08:58:15 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:36.289 08:58:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:36.289 08:58:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:36.289 08:58:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:36.289 08:58:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 ))
00:04:36.289 08:58:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:04:36.289 08:58:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:36.289 08:58:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:04:36.289 08:58:15 -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:36.289 08:58:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:36.289 08:58:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:36.289 08:58:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:04:36.289 08:58:15 -- common/autotest_common.sh@1566 -- # device=0x0010
00:04:36.289 08:58:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:04:36.289 08:58:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:04:36.289 08:58:15 -- common/autotest_common.sh@1572 -- # return 0
00:04:36.289 08:58:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:04:36.289 08:58:15 -- common/autotest_common.sh@1580 -- # return 0
00:04:36.289 08:58:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:36.289 08:58:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:36.289 08:58:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:36.289 08:58:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:36.289 08:58:15 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:36.289 08:58:15 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:36.289 08:58:15 -- common/autotest_common.sh@10 -- # set +x
00:04:36.289 08:58:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:36.289 08:58:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:36.289 08:58:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.289 08:58:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.289 08:58:15 -- common/autotest_common.sh@10 -- # set +x
00:04:36.289 ************************************
00:04:36.289 START TEST env
00:04:36.289 ************************************
00:04:36.289 08:58:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:36.548 * Looking for test storage...
00:04:36.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:36.548 08:58:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:36.548 08:58:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:36.548 08:58:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:36.548 08:58:15 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:36.548 08:58:15 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:36.548 08:58:15 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:36.548 08:58:15 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:36.548 08:58:15 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:36.548 08:58:15 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:36.548 08:58:15 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:36.548 08:58:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:36.548 08:58:15 env -- scripts/common.sh@344 -- # case "$op" in
00:04:36.548 08:58:15 env -- scripts/common.sh@345 -- # : 1
00:04:36.548 08:58:15 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:36.548 08:58:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:36.548 08:58:15 env -- scripts/common.sh@365 -- # decimal 1
00:04:36.548 08:58:15 env -- scripts/common.sh@353 -- # local d=1
00:04:36.548 08:58:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:36.548 08:58:15 env -- scripts/common.sh@355 -- # echo 1
00:04:36.548 08:58:15 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:36.548 08:58:15 env -- scripts/common.sh@366 -- # decimal 2
00:04:36.548 08:58:15 env -- scripts/common.sh@353 -- # local d=2
00:04:36.548 08:58:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:36.548 08:58:15 env -- scripts/common.sh@355 -- # echo 2
00:04:36.548 08:58:15 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:36.548 08:58:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:36.548 08:58:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:36.548 08:58:15 env -- scripts/common.sh@368 -- # return 0
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:36.548 --rc genhtml_branch_coverage=1
00:04:36.548 --rc genhtml_function_coverage=1
00:04:36.548 --rc genhtml_legend=1
00:04:36.548 --rc geninfo_all_blocks=1
00:04:36.548 --rc geninfo_unexecuted_blocks=1
00:04:36.548
00:04:36.548 '
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:36.548 --rc genhtml_branch_coverage=1
00:04:36.548 --rc genhtml_function_coverage=1
00:04:36.548 --rc genhtml_legend=1
00:04:36.548 --rc geninfo_all_blocks=1
00:04:36.548 --rc geninfo_unexecuted_blocks=1
00:04:36.548
00:04:36.548 '
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:36.548 --rc genhtml_branch_coverage=1
00:04:36.548 --rc genhtml_function_coverage=1
00:04:36.548 --rc genhtml_legend=1
00:04:36.548 --rc geninfo_all_blocks=1
00:04:36.548 --rc geninfo_unexecuted_blocks=1
00:04:36.548
00:04:36.548 '
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:36.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:36.548 --rc genhtml_branch_coverage=1
00:04:36.548 --rc genhtml_function_coverage=1
00:04:36.548 --rc genhtml_legend=1
00:04:36.548 --rc geninfo_all_blocks=1
00:04:36.548 --rc geninfo_unexecuted_blocks=1
00:04:36.548
00:04:36.548 '
00:04:36.548 08:58:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.548 08:58:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.548 08:58:15 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.548 ************************************
00:04:36.548 START TEST env_memory
00:04:36.548 ************************************
00:04:36.548 08:58:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:36.549
00:04:36.549
00:04:36.549 CUnit - A unit testing framework for C - Version 2.1-3
00:04:36.549 http://cunit.sourceforge.net/
00:04:36.549
00:04:36.549
00:04:36.549 Suite: memory
00:04:36.549 Test: alloc and free memory map ...[2024-11-20 08:58:15.433900] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:36.549 passed
00:04:36.807 Test: mem map translation ...[2024-11-20 08:58:15.471338] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:36.807 [2024-11-20 08:58:15.471456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:36.807 [2024-11-20 08:58:15.471543] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:36.807 [2024-11-20 08:58:15.471566] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:36.807 passed
00:04:36.807 Test: mem map registration ...[2024-11-20 08:58:15.544314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:36.807 [2024-11-20 08:58:15.544388] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:36.807 passed
00:04:36.807 Test: mem map adjacent registrations ...passed
00:04:36.807
00:04:36.807 Run Summary: Type Total Ran Passed Failed Inactive
00:04:36.807 suites 1 1 n/a 0 0
00:04:36.807 tests 4 4 4 0 0
00:04:36.807 asserts 152 152 152 0 n/a
00:04:36.807
00:04:36.807 Elapsed time = 0.236 seconds
00:04:36.807
00:04:36.807 real 0m0.258s
00:04:36.807 user 0m0.238s
00:04:36.807 sys 0m0.015s
00:04:36.807 08:58:15 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.807 08:58:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:36.807 ************************************
00:04:36.807 END TEST env_memory
00:04:36.807 ************************************
00:04:36.807 08:58:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:36.807 08:58:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.807 08:58:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.807 08:58:15 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.807 ************************************
00:04:36.807 START TEST env_vtophys
00:04:36.807 ************************************
00:04:36.807 08:58:15 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:36.807 EAL: lib.eal log level changed from notice to debug
00:04:36.807 EAL: Detected lcore 0 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 1 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 2 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 3 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 4 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 5 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 6 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 7 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 8 as core 0 on socket 0
00:04:36.807 EAL: Detected lcore 9 as core 0 on socket 0
00:04:37.079 EAL: Maximum logical cores by configuration: 128
00:04:37.079 EAL: Detected CPU lcores: 10
00:04:37.079 EAL: Detected NUMA nodes: 1
00:04:37.079 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:37.079 EAL: Detected shared linkage of DPDK
00:04:37.079 EAL: No shared files mode enabled, IPC will be disabled
00:04:37.079 EAL: Selected IOVA mode 'PA'
00:04:37.079 EAL: Probing VFIO support...
00:04:37.079 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:37.079 EAL: VFIO modules not loaded, skipping VFIO support...
00:04:37.079 EAL: Ask a virtual area of 0x2e000 bytes
00:04:37.079 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:37.079 EAL: Setting up physically contiguous memory...
00:04:37.079 EAL: Setting maximum number of open files to 524288
00:04:37.079 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:37.079 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:37.079 EAL: Ask a virtual area of 0x61000 bytes
00:04:37.079 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:37.079 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:37.079 EAL: Ask a virtual area of 0x400000000 bytes
00:04:37.079 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:37.079 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:37.079 EAL: Ask a virtual area of 0x61000 bytes
00:04:37.079 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:37.079 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:37.079 EAL: Ask a virtual area of 0x400000000 bytes
00:04:37.079 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:37.079 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:37.079 EAL: Ask a virtual area of 0x61000 bytes
00:04:37.079 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:37.079 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:37.079 EAL: Ask a virtual area of 0x400000000 bytes
00:04:37.079 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:37.079 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:37.079 EAL: Ask a virtual area of 0x61000 bytes
00:04:37.079 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:37.079 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:37.079 EAL: Ask a virtual area of 0x400000000 bytes
00:04:37.079 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:37.079 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:37.079 EAL: Hugepages will be freed exactly as allocated.
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: TSC frequency is ~2200000 KHz
00:04:37.079 EAL: Main lcore 0 is ready (tid=7f24b601fa00;cpuset=[0])
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 0
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 2MB
00:04:37.079 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:37.079 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:37.079 EAL: Mem event callback 'spdk:(nil)' registered
00:04:37.079 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:04:37.079
00:04:37.079
00:04:37.079 CUnit - A unit testing framework for C - Version 2.1-3
00:04:37.079 http://cunit.sourceforge.net/
00:04:37.079
00:04:37.079
00:04:37.079 Suite: components_suite
00:04:37.079 Test: vtophys_malloc_test ...passed
00:04:37.079 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 4MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 4MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 6MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 6MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 10MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 10MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 18MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 18MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 34MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 34MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.079 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.079 EAL: Restoring previous memory policy: 4
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was expanded by 66MB
00:04:37.079 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.079 EAL: request: mp_malloc_sync
00:04:37.079 EAL: No shared files mode enabled, IPC is disabled
00:04:37.079 EAL: Heap on socket 0 was shrunk by 66MB
00:04:37.079 EAL: Trying to obtain current memory policy.
00:04:37.080 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.080 EAL: Restoring previous memory policy: 4
00:04:37.080 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.080 EAL: request: mp_malloc_sync
00:04:37.080 EAL: No shared files mode enabled, IPC is disabled
00:04:37.080 EAL: Heap on socket 0 was expanded by 130MB
00:04:37.350 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.350 EAL: request: mp_malloc_sync
00:04:37.350 EAL: No shared files mode enabled, IPC is disabled
00:04:37.350 EAL: Heap on socket 0 was shrunk by 130MB
00:04:37.350 EAL: Trying to obtain current memory policy.
00:04:37.350 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.350 EAL: Restoring previous memory policy: 4
00:04:37.350 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.350 EAL: request: mp_malloc_sync
00:04:37.350 EAL: No shared files mode enabled, IPC is disabled
00:04:37.350 EAL: Heap on socket 0 was expanded by 258MB
00:04:37.350 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.350 EAL: request: mp_malloc_sync
00:04:37.350 EAL: No shared files mode enabled, IPC is disabled
00:04:37.350 EAL: Heap on socket 0 was shrunk by 258MB
00:04:37.350 EAL: Trying to obtain current memory policy.
00:04:37.350 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:37.608 EAL: Restoring previous memory policy: 4
00:04:37.608 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.608 EAL: request: mp_malloc_sync
00:04:37.608 EAL: No shared files mode enabled, IPC is disabled
00:04:37.608 EAL: Heap on socket 0 was expanded by 514MB
00:04:37.608 EAL: Calling mem event callback 'spdk:(nil)'
00:04:37.867 EAL: request: mp_malloc_sync
00:04:37.867 EAL: No shared files mode enabled, IPC is disabled
00:04:37.867 EAL: Heap on socket 0 was shrunk by 514MB
00:04:37.867 EAL: Trying to obtain current memory policy.
00:04:37.867 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:38.126 EAL: Restoring previous memory policy: 4
00:04:38.126 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.126 EAL: request: mp_malloc_sync
00:04:38.126 EAL: No shared files mode enabled, IPC is disabled
00:04:38.126 EAL: Heap on socket 0 was expanded by 1026MB
00:04:38.126 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.385 passed
00:04:38.385
00:04:38.385 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.385 suites 1 1 n/a 0 0
00:04:38.385 tests 2 2 2 0 0
00:04:38.385 asserts 5400 5400 5400 0 n/a
00:04:38.385
00:04:38.385 Elapsed time = 1.323 seconds
00:04:38.385 EAL: request: mp_malloc_sync
00:04:38.385 EAL: No shared files mode enabled, IPC is disabled
00:04:38.385 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:38.385 EAL: Calling mem event callback 'spdk:(nil)'
00:04:38.385 EAL: request: mp_malloc_sync
00:04:38.385 EAL: No shared files mode enabled, IPC is disabled
00:04:38.385 EAL: Heap on socket 0 was shrunk by 2MB
00:04:38.385 EAL: No shared files mode enabled, IPC is disabled
00:04:38.385 EAL: No shared files mode enabled, IPC is disabled
00:04:38.385 EAL: No shared files mode enabled, IPC is disabled
00:04:38.385
00:04:38.385 real 0m1.546s
00:04:38.385 user 0m0.842s
00:04:38.385 sys 0m0.566s
00:04:38.385 08:58:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.385 08:58:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:38.385 ************************************
00:04:38.385 END TEST env_vtophys
00:04:38.386 ************************************
00:04:38.386 08:58:17 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:38.386 08:58:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.386 08:58:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.386 08:58:17 env -- common/autotest_common.sh@10 -- # set +x
00:04:38.386 ************************************
00:04:38.386 START TEST env_pci
00:04:38.386 ************************************
00:04:38.386 08:58:17 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:38.645
00:04:38.645
00:04:38.645 CUnit - A unit testing framework for C - Version 2.1-3
00:04:38.645 http://cunit.sourceforge.net/
00:04:38.645
00:04:38.645
00:04:38.645 Suite: pci
00:04:38.645 Test: pci_hook ...[2024-11-20 08:58:17.306936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58563 has claimed it
00:04:38.645 passed
00:04:38.645
00:04:38.645 Run Summary: Type Total Ran Passed Failed Inactive
00:04:38.645 suites 1 1 n/a 0 0
00:04:38.645 tests 1 1 1 0 0
00:04:38.645 asserts 25 25 25 0 n/a
00:04:38.645
00:04:38.645 Elapsed time = 0.002 seconds
00:04:38.645 EAL: Cannot find device (10000:00:01.0)
00:04:38.645 EAL: Failed to attach device on primary process
00:04:38.645
00:04:38.645 real 0m0.024s
00:04:38.645 user 0m0.011s
00:04:38.645 sys 0m0.012s
00:04:38.645 08:58:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.645 08:58:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:38.645 ************************************
00:04:38.645 END TEST env_pci
00:04:38.645 ************************************
00:04:38.645 08:58:17 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:38.645 08:58:17 env -- env/env.sh@15 -- # uname
00:04:38.645 08:58:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:38.645 08:58:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:38.645 08:58:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:38.645 08:58:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:38.645 08:58:17 env
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.645 08:58:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 START TEST env_dpdk_post_init 00:04:38.645 ************************************ 00:04:38.645 08:58:17 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.645 EAL: Detected CPU lcores: 10 00:04:38.645 EAL: Detected NUMA nodes: 1 00:04:38.645 EAL: Detected shared linkage of DPDK 00:04:38.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.645 EAL: Selected IOVA mode 'PA' 00:04:38.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.645 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:38.645 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:38.645 Starting DPDK initialization... 00:04:38.645 Starting SPDK post initialization... 00:04:38.645 SPDK NVMe probe 00:04:38.645 Attaching to 0000:00:10.0 00:04:38.645 Attaching to 0000:00:11.0 00:04:38.645 Attached to 0000:00:10.0 00:04:38.645 Attached to 0000:00:11.0 00:04:38.645 Cleaning up... 
00:04:38.645 00:04:38.645 real 0m0.182s 00:04:38.645 user 0m0.051s 00:04:38.645 sys 0m0.031s 00:04:38.645 08:58:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.645 08:58:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:38.645 ************************************ 00:04:38.645 END TEST env_dpdk_post_init 00:04:38.645 ************************************ 00:04:38.905 08:58:17 env -- env/env.sh@26 -- # uname 00:04:38.905 08:58:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:38.905 08:58:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.905 08:58:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.905 08:58:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.905 08:58:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.905 ************************************ 00:04:38.905 START TEST env_mem_callbacks 00:04:38.905 ************************************ 00:04:38.905 08:58:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:38.905 EAL: Detected CPU lcores: 10 00:04:38.905 EAL: Detected NUMA nodes: 1 00:04:38.905 EAL: Detected shared linkage of DPDK 00:04:38.905 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:38.905 EAL: Selected IOVA mode 'PA' 00:04:38.905 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:38.905 00:04:38.905 00:04:38.905 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.905 http://cunit.sourceforge.net/ 00:04:38.905 00:04:38.905 00:04:38.905 Suite: memory 00:04:38.905 Test: test ... 
00:04:38.905 register 0x200000200000 2097152 00:04:38.905 malloc 3145728 00:04:38.905 register 0x200000400000 4194304 00:04:38.905 buf 0x200000500000 len 3145728 PASSED 00:04:38.905 malloc 64 00:04:38.905 buf 0x2000004fff40 len 64 PASSED 00:04:38.905 malloc 4194304 00:04:38.905 register 0x200000800000 6291456 00:04:38.905 buf 0x200000a00000 len 4194304 PASSED 00:04:38.905 free 0x200000500000 3145728 00:04:38.905 free 0x2000004fff40 64 00:04:38.905 unregister 0x200000400000 4194304 PASSED 00:04:38.905 free 0x200000a00000 4194304 00:04:38.905 unregister 0x200000800000 6291456 PASSED 00:04:38.905 malloc 8388608 00:04:38.905 register 0x200000400000 10485760 00:04:38.905 buf 0x200000600000 len 8388608 PASSED 00:04:38.905 free 0x200000600000 8388608 00:04:38.905 unregister 0x200000400000 10485760 PASSED 00:04:38.905 passed 00:04:38.905 00:04:38.905 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.905 suites 1 1 n/a 0 0 00:04:38.905 tests 1 1 1 0 0 00:04:38.905 asserts 15 15 15 0 n/a 00:04:38.905 00:04:38.905 Elapsed time = 0.008 seconds 00:04:38.905 00:04:38.905 real 0m0.136s 00:04:38.905 user 0m0.015s 00:04:38.905 sys 0m0.019s 00:04:38.905 08:58:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.905 08:58:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:38.905 ************************************ 00:04:38.905 END TEST env_mem_callbacks 00:04:38.905 ************************************ 00:04:38.905 00:04:38.905 real 0m2.584s 00:04:38.905 user 0m1.356s 00:04:38.905 sys 0m0.864s 00:04:38.905 08:58:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.905 08:58:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.905 ************************************ 00:04:38.905 END TEST env 00:04:38.905 ************************************ 00:04:38.905 08:58:17 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:38.905 08:58:17 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.905 08:58:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.905 08:58:17 -- common/autotest_common.sh@10 -- # set +x 00:04:39.164 ************************************ 00:04:39.164 START TEST rpc 00:04:39.164 ************************************ 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.164 * Looking for test storage... 00:04:39.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.164 08:58:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.164 08:58:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.164 08:58:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.164 08:58:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.164 08:58:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.164 08:58:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.164 08:58:17 rpc -- scripts/common.sh@345 -- # : 1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.164 08:58:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.164 08:58:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.164 08:58:17 rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.164 08:58:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.164 08:58:17 rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.164 08:58:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.164 08:58:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.164 08:58:17 rpc -- scripts/common.sh@368 -- # return 0 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.164 08:58:17 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.164 --rc genhtml_branch_coverage=1 00:04:39.165 --rc genhtml_function_coverage=1 00:04:39.165 --rc genhtml_legend=1 00:04:39.165 --rc geninfo_all_blocks=1 00:04:39.165 --rc geninfo_unexecuted_blocks=1 00:04:39.165 00:04:39.165 ' 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.165 --rc genhtml_branch_coverage=1 00:04:39.165 --rc genhtml_function_coverage=1 00:04:39.165 --rc genhtml_legend=1 00:04:39.165 --rc geninfo_all_blocks=1 00:04:39.165 --rc geninfo_unexecuted_blocks=1 00:04:39.165 00:04:39.165 ' 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:39.165 --rc genhtml_branch_coverage=1 00:04:39.165 --rc genhtml_function_coverage=1 00:04:39.165 --rc genhtml_legend=1 00:04:39.165 --rc geninfo_all_blocks=1 00:04:39.165 --rc geninfo_unexecuted_blocks=1 00:04:39.165 00:04:39.165 ' 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.165 --rc genhtml_branch_coverage=1 00:04:39.165 --rc genhtml_function_coverage=1 00:04:39.165 --rc genhtml_legend=1 00:04:39.165 --rc geninfo_all_blocks=1 00:04:39.165 --rc geninfo_unexecuted_blocks=1 00:04:39.165 00:04:39.165 ' 00:04:39.165 08:58:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58680 00:04:39.165 08:58:17 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:39.165 08:58:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.165 08:58:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58680 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 58680 ']' 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.165 08:58:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.165 [2024-11-20 08:58:18.067642] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:39.165 [2024-11-20 08:58:18.067752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58680 ] 00:04:39.423 [2024-11-20 08:58:18.220476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.423 [2024-11-20 08:58:18.291514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:39.423 [2024-11-20 08:58:18.291574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58680' to capture a snapshot of events at runtime. 00:04:39.423 [2024-11-20 08:58:18.291587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:39.423 [2024-11-20 08:58:18.291598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:39.423 [2024-11-20 08:58:18.291607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58680 for offline analysis/debug. 
00:04:39.423 [2024-11-20 08:58:18.292157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.359 08:58:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.359 08:58:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.359 08:58:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.359 08:58:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.359 08:58:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:40.359 08:58:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:40.359 08:58:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.359 08:58:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.359 08:58:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.359 ************************************ 00:04:40.359 START TEST rpc_integrity 00:04:40.359 ************************************ 00:04:40.359 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:40.359 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:40.359 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.359 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.359 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.359 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:40.359 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:40.359 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:40.360 08:58:19 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.360 { 00:04:40.360 "aliases": [ 00:04:40.360 "42387bb4-4fd4-4d60-ad05-6aa4f9ff9d82" 00:04:40.360 ], 00:04:40.360 "assigned_rate_limits": { 00:04:40.360 "r_mbytes_per_sec": 0, 00:04:40.360 "rw_ios_per_sec": 0, 00:04:40.360 "rw_mbytes_per_sec": 0, 00:04:40.360 "w_mbytes_per_sec": 0 00:04:40.360 }, 00:04:40.360 "block_size": 512, 00:04:40.360 "claimed": false, 00:04:40.360 "driver_specific": {}, 00:04:40.360 "memory_domains": [ 00:04:40.360 { 00:04:40.360 "dma_device_id": "system", 00:04:40.360 "dma_device_type": 1 00:04:40.360 }, 00:04:40.360 { 00:04:40.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.360 "dma_device_type": 2 00:04:40.360 } 00:04:40.360 ], 00:04:40.360 "name": "Malloc0", 00:04:40.360 "num_blocks": 16384, 00:04:40.360 "product_name": "Malloc disk", 00:04:40.360 "supported_io_types": { 00:04:40.360 "abort": true, 00:04:40.360 "compare": false, 00:04:40.360 "compare_and_write": false, 00:04:40.360 "copy": true, 00:04:40.360 "flush": true, 00:04:40.360 "get_zone_info": false, 00:04:40.360 "nvme_admin": false, 00:04:40.360 "nvme_io": false, 00:04:40.360 "nvme_io_md": false, 00:04:40.360 "nvme_iov_md": false, 
00:04:40.360 "read": true, 00:04:40.360 "reset": true, 00:04:40.360 "seek_data": false, 00:04:40.360 "seek_hole": false, 00:04:40.360 "unmap": true, 00:04:40.360 "write": true, 00:04:40.360 "write_zeroes": true, 00:04:40.360 "zcopy": true, 00:04:40.360 "zone_append": false, 00:04:40.360 "zone_management": false 00:04:40.360 }, 00:04:40.360 "uuid": "42387bb4-4fd4-4d60-ad05-6aa4f9ff9d82", 00:04:40.360 "zoned": false 00:04:40.360 } 00:04:40.360 ]' 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.360 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.360 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.360 [2024-11-20 08:58:19.272134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:40.360 [2024-11-20 08:58:19.272193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.360 [2024-11-20 08:58:19.272216] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x106dba0 00:04:40.360 [2024-11-20 08:58:19.272226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.360 [2024-11-20 08:58:19.273955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.360 [2024-11-20 08:58:19.273994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.620 Passthru0 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.620 { 00:04:40.620 "aliases": [ 00:04:40.620 "42387bb4-4fd4-4d60-ad05-6aa4f9ff9d82" 00:04:40.620 ], 00:04:40.620 "assigned_rate_limits": { 00:04:40.620 "r_mbytes_per_sec": 0, 00:04:40.620 "rw_ios_per_sec": 0, 00:04:40.620 "rw_mbytes_per_sec": 0, 00:04:40.620 "w_mbytes_per_sec": 0 00:04:40.620 }, 00:04:40.620 "block_size": 512, 00:04:40.620 "claim_type": "exclusive_write", 00:04:40.620 "claimed": true, 00:04:40.620 "driver_specific": {}, 00:04:40.620 "memory_domains": [ 00:04:40.620 { 00:04:40.620 "dma_device_id": "system", 00:04:40.620 "dma_device_type": 1 00:04:40.620 }, 00:04:40.620 { 00:04:40.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.620 "dma_device_type": 2 00:04:40.620 } 00:04:40.620 ], 00:04:40.620 "name": "Malloc0", 00:04:40.620 "num_blocks": 16384, 00:04:40.620 "product_name": "Malloc disk", 00:04:40.620 "supported_io_types": { 00:04:40.620 "abort": true, 00:04:40.620 "compare": false, 00:04:40.620 "compare_and_write": false, 00:04:40.620 "copy": true, 00:04:40.620 "flush": true, 00:04:40.620 "get_zone_info": false, 00:04:40.620 "nvme_admin": false, 00:04:40.620 "nvme_io": false, 00:04:40.620 "nvme_io_md": false, 00:04:40.620 "nvme_iov_md": false, 00:04:40.620 "read": true, 00:04:40.620 "reset": true, 00:04:40.620 "seek_data": false, 00:04:40.620 "seek_hole": false, 00:04:40.620 "unmap": true, 00:04:40.620 "write": true, 00:04:40.620 "write_zeroes": true, 00:04:40.620 "zcopy": true, 00:04:40.620 "zone_append": false, 00:04:40.620 "zone_management": false 00:04:40.620 }, 00:04:40.620 "uuid": "42387bb4-4fd4-4d60-ad05-6aa4f9ff9d82", 00:04:40.620 "zoned": false 00:04:40.620 }, 00:04:40.620 { 00:04:40.620 "aliases": [ 00:04:40.620 "b0cbbe77-232c-502d-8115-a4dd800f4e2c" 00:04:40.620 ], 00:04:40.620 "assigned_rate_limits": { 00:04:40.620 "r_mbytes_per_sec": 0, 00:04:40.620 
"rw_ios_per_sec": 0, 00:04:40.620 "rw_mbytes_per_sec": 0, 00:04:40.620 "w_mbytes_per_sec": 0 00:04:40.620 }, 00:04:40.620 "block_size": 512, 00:04:40.620 "claimed": false, 00:04:40.620 "driver_specific": { 00:04:40.620 "passthru": { 00:04:40.620 "base_bdev_name": "Malloc0", 00:04:40.620 "name": "Passthru0" 00:04:40.620 } 00:04:40.620 }, 00:04:40.620 "memory_domains": [ 00:04:40.620 { 00:04:40.620 "dma_device_id": "system", 00:04:40.620 "dma_device_type": 1 00:04:40.620 }, 00:04:40.620 { 00:04:40.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.620 "dma_device_type": 2 00:04:40.620 } 00:04:40.620 ], 00:04:40.620 "name": "Passthru0", 00:04:40.620 "num_blocks": 16384, 00:04:40.620 "product_name": "passthru", 00:04:40.620 "supported_io_types": { 00:04:40.620 "abort": true, 00:04:40.620 "compare": false, 00:04:40.620 "compare_and_write": false, 00:04:40.620 "copy": true, 00:04:40.620 "flush": true, 00:04:40.620 "get_zone_info": false, 00:04:40.620 "nvme_admin": false, 00:04:40.620 "nvme_io": false, 00:04:40.620 "nvme_io_md": false, 00:04:40.620 "nvme_iov_md": false, 00:04:40.620 "read": true, 00:04:40.620 "reset": true, 00:04:40.620 "seek_data": false, 00:04:40.620 "seek_hole": false, 00:04:40.620 "unmap": true, 00:04:40.620 "write": true, 00:04:40.620 "write_zeroes": true, 00:04:40.620 "zcopy": true, 00:04:40.620 "zone_append": false, 00:04:40.620 "zone_management": false 00:04:40.620 }, 00:04:40.620 "uuid": "b0cbbe77-232c-502d-8115-a4dd800f4e2c", 00:04:40.620 "zoned": false 00:04:40.620 } 00:04:40.620 ]' 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 08:58:19 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.620 08:58:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.620 00:04:40.620 real 0m0.330s 00:04:40.620 user 0m0.219s 00:04:40.620 sys 0m0.034s 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.620 ************************************ 00:04:40.620 END TEST rpc_integrity 00:04:40.620 ************************************ 00:04:40.620 08:58:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.620 08:58:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:40.620 08:58:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.621 08:58:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.621 08:58:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 ************************************ 00:04:40.621 START TEST rpc_plugins 00:04:40.621 ************************************ 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:40.621 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.621 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:40.621 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.621 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.621 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:40.621 { 00:04:40.621 "aliases": [ 00:04:40.621 "8e360231-e905-4f31-8548-58290dd6de50" 00:04:40.621 ], 00:04:40.621 "assigned_rate_limits": { 00:04:40.621 "r_mbytes_per_sec": 0, 00:04:40.621 "rw_ios_per_sec": 0, 00:04:40.621 "rw_mbytes_per_sec": 0, 00:04:40.621 "w_mbytes_per_sec": 0 00:04:40.621 }, 00:04:40.621 "block_size": 4096, 00:04:40.621 "claimed": false, 00:04:40.621 "driver_specific": {}, 00:04:40.621 "memory_domains": [ 00:04:40.621 { 00:04:40.621 "dma_device_id": "system", 00:04:40.621 "dma_device_type": 1 00:04:40.621 }, 00:04:40.621 { 00:04:40.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.621 "dma_device_type": 2 00:04:40.621 } 00:04:40.621 ], 00:04:40.621 "name": "Malloc1", 00:04:40.621 "num_blocks": 256, 00:04:40.621 "product_name": "Malloc disk", 00:04:40.621 "supported_io_types": { 00:04:40.621 "abort": true, 00:04:40.621 "compare": false, 00:04:40.621 "compare_and_write": false, 00:04:40.621 "copy": true, 00:04:40.621 "flush": true, 00:04:40.621 "get_zone_info": false, 00:04:40.621 "nvme_admin": false, 00:04:40.621 "nvme_io": false, 00:04:40.621 "nvme_io_md": false, 00:04:40.621 "nvme_iov_md": false, 00:04:40.621 "read": true, 00:04:40.621 "reset": true, 
00:04:40.621 "seek_data": false, 00:04:40.621 "seek_hole": false, 00:04:40.621 "unmap": true, 00:04:40.621 "write": true, 00:04:40.621 "write_zeroes": true, 00:04:40.621 "zcopy": true, 00:04:40.621 "zone_append": false, 00:04:40.621 "zone_management": false 00:04:40.621 }, 00:04:40.621 "uuid": "8e360231-e905-4f31-8548-58290dd6de50", 00:04:40.621 "zoned": false 00:04:40.621 } 00:04:40.621 ]' 00:04:40.621 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:40.880 08:58:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:40.880 00:04:40.880 real 0m0.166s 00:04:40.880 user 0m0.104s 00:04:40.880 sys 0m0.013s 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.880 08:58:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 ************************************ 00:04:40.880 END TEST rpc_plugins 00:04:40.880 ************************************ 00:04:40.880 08:58:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:40.880 08:58:19 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.880 08:58:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.880 08:58:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 ************************************ 00:04:40.880 START TEST rpc_trace_cmd_test 00:04:40.880 ************************************ 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:40.880 "bdev": { 00:04:40.880 "mask": "0x8", 00:04:40.880 "tpoint_mask": "0xffffffffffffffff" 00:04:40.880 }, 00:04:40.880 "bdev_nvme": { 00:04:40.880 "mask": "0x4000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "bdev_raid": { 00:04:40.880 "mask": "0x20000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "blob": { 00:04:40.880 "mask": "0x10000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "blobfs": { 00:04:40.880 "mask": "0x80", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "dsa": { 00:04:40.880 "mask": "0x200", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "ftl": { 00:04:40.880 "mask": "0x40", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "iaa": { 00:04:40.880 "mask": "0x1000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "iscsi_conn": { 00:04:40.880 "mask": "0x2", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "nvme_pcie": { 00:04:40.880 "mask": "0x800", 
00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "nvme_tcp": { 00:04:40.880 "mask": "0x2000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "nvmf_rdma": { 00:04:40.880 "mask": "0x10", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "nvmf_tcp": { 00:04:40.880 "mask": "0x20", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "scheduler": { 00:04:40.880 "mask": "0x40000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "scsi": { 00:04:40.880 "mask": "0x4", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "sock": { 00:04:40.880 "mask": "0x8000", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "thread": { 00:04:40.880 "mask": "0x400", 00:04:40.880 "tpoint_mask": "0x0" 00:04:40.880 }, 00:04:40.880 "tpoint_group_mask": "0x8", 00:04:40.880 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58680" 00:04:40.880 }' 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:40.880 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.139 00:04:41.139 real 0m0.267s 00:04:41.139 user 0m0.230s 00:04:41.139 sys 0m0.027s 00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:41.139 08:58:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.139 ************************************ 00:04:41.139 END TEST rpc_trace_cmd_test 00:04:41.139 ************************************ 00:04:41.139 08:58:20 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:41.139 08:58:20 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:41.139 08:58:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.139 08:58:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.139 08:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.139 ************************************ 00:04:41.139 START TEST go_rpc 00:04:41.139 ************************************ 00:04:41.139 08:58:20 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:04:41.139 08:58:20 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:41.139 08:58:20 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:41.139 08:58:20 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@56 -- # 
bdevs='[{"aliases":["54e99875-f3c8-4c9a-a7bd-3b55c622c82c"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"54e99875-f3c8-4c9a-a7bd-3b55c622c82c","zoned":false}]' 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:41.398 08:58:20 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:41.398 00:04:41.398 real 0m0.235s 00:04:41.398 user 0m0.161s 00:04:41.398 sys 0m0.033s 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.398 08:58:20 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.398 ************************************ 00:04:41.398 END TEST go_rpc 00:04:41.398 
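The go_rpc test above captures the one-line bdev JSON from hello_gorpc and counts entries with `jq length` before and after creating and deleting Malloc2. Where jq is unavailable, individual bare booleans can still be pulled out of such single-line JSON with bash substring trimming — a hypothetical helper, not part of rpc.sh, shown against a trimmed copy of the Malloc2 record above:

```shell
# Hypothetical helper (not in rpc.sh): extract one bare boolean value
# from single-line JSON using bash prefix/suffix removal only.
json_bool() {
    local rest=${1#*\"$2\":}   # drop everything through "key":
    rest=${rest%%,*}           # cut at the first comma ...
    rest=${rest%%\}*}          # ... or at a closing brace, if that comes first
    echo "$rest"
}

# Trimmed copy of the Malloc2 record printed by bdev_get_bdevs above.
bdevs='[{"block_size":512,"claimed":false,"name":"Malloc2","supported_io_types":{"unmap":true,"write":true}}]'
json_bool "$bdevs" claimed   # prints: false
json_bool "$bdevs" unmap     # prints: true
```

This only works for flat, unquoted values (booleans, numbers) whose key names are unique in the document; the harness's jq-based checks are the robust path.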
************************************ 00:04:41.398 08:58:20 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.398 08:58:20 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.398 08:58:20 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.398 08:58:20 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.398 08:58:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.398 ************************************ 00:04:41.398 START TEST rpc_daemon_integrity 00:04:41.398 ************************************ 00:04:41.398 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:41.398 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.398 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.398 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.657 08:58:20 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.657 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.657 { 00:04:41.657 "aliases": [ 00:04:41.657 "0f9bc1f6-cb99-4e22-9c1f-ad3e5c97df34" 00:04:41.657 ], 00:04:41.657 "assigned_rate_limits": { 00:04:41.657 "r_mbytes_per_sec": 0, 00:04:41.657 "rw_ios_per_sec": 0, 00:04:41.657 "rw_mbytes_per_sec": 0, 00:04:41.657 "w_mbytes_per_sec": 0 00:04:41.657 }, 00:04:41.657 "block_size": 512, 00:04:41.657 "claimed": false, 00:04:41.657 "driver_specific": {}, 00:04:41.657 "memory_domains": [ 00:04:41.657 { 00:04:41.657 "dma_device_id": "system", 00:04:41.657 "dma_device_type": 1 00:04:41.657 }, 00:04:41.657 { 00:04:41.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.657 "dma_device_type": 2 00:04:41.657 } 00:04:41.657 ], 00:04:41.657 "name": "Malloc3", 00:04:41.657 "num_blocks": 16384, 00:04:41.657 "product_name": "Malloc disk", 00:04:41.657 "supported_io_types": { 00:04:41.657 "abort": true, 00:04:41.657 "compare": false, 00:04:41.657 "compare_and_write": false, 00:04:41.657 "copy": true, 00:04:41.657 "flush": true, 00:04:41.657 "get_zone_info": false, 00:04:41.657 "nvme_admin": false, 00:04:41.657 "nvme_io": false, 00:04:41.657 "nvme_io_md": false, 00:04:41.657 "nvme_iov_md": false, 00:04:41.657 "read": true, 00:04:41.657 "reset": true, 00:04:41.657 "seek_data": false, 00:04:41.657 "seek_hole": false, 00:04:41.657 "unmap": true, 00:04:41.658 "write": true, 00:04:41.658 "write_zeroes": true, 00:04:41.658 "zcopy": true, 00:04:41.658 "zone_append": false, 00:04:41.658 "zone_management": false 00:04:41.658 }, 00:04:41.658 "uuid": "0f9bc1f6-cb99-4e22-9c1f-ad3e5c97df34", 00:04:41.658 "zoned": false 00:04:41.658 } 00:04:41.658 ]' 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 
== 1 ']' 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.658 [2024-11-20 08:58:20.457942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:41.658 [2024-11-20 08:58:20.457997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.658 [2024-11-20 08:58:20.458021] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf2c420 00:04:41.658 [2024-11-20 08:58:20.458032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.658 [2024-11-20 08:58:20.459729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.658 [2024-11-20 08:58:20.459779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.658 Passthru0 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.658 { 00:04:41.658 "aliases": [ 00:04:41.658 "0f9bc1f6-cb99-4e22-9c1f-ad3e5c97df34" 00:04:41.658 ], 00:04:41.658 "assigned_rate_limits": { 00:04:41.658 "r_mbytes_per_sec": 0, 00:04:41.658 "rw_ios_per_sec": 0, 00:04:41.658 "rw_mbytes_per_sec": 0, 00:04:41.658 "w_mbytes_per_sec": 0 00:04:41.658 }, 00:04:41.658 "block_size": 512, 00:04:41.658 "claim_type": 
"exclusive_write", 00:04:41.658 "claimed": true, 00:04:41.658 "driver_specific": {}, 00:04:41.658 "memory_domains": [ 00:04:41.658 { 00:04:41.658 "dma_device_id": "system", 00:04:41.658 "dma_device_type": 1 00:04:41.658 }, 00:04:41.658 { 00:04:41.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.658 "dma_device_type": 2 00:04:41.658 } 00:04:41.658 ], 00:04:41.658 "name": "Malloc3", 00:04:41.658 "num_blocks": 16384, 00:04:41.658 "product_name": "Malloc disk", 00:04:41.658 "supported_io_types": { 00:04:41.658 "abort": true, 00:04:41.658 "compare": false, 00:04:41.658 "compare_and_write": false, 00:04:41.658 "copy": true, 00:04:41.658 "flush": true, 00:04:41.658 "get_zone_info": false, 00:04:41.658 "nvme_admin": false, 00:04:41.658 "nvme_io": false, 00:04:41.658 "nvme_io_md": false, 00:04:41.658 "nvme_iov_md": false, 00:04:41.658 "read": true, 00:04:41.658 "reset": true, 00:04:41.658 "seek_data": false, 00:04:41.658 "seek_hole": false, 00:04:41.658 "unmap": true, 00:04:41.658 "write": true, 00:04:41.658 "write_zeroes": true, 00:04:41.658 "zcopy": true, 00:04:41.658 "zone_append": false, 00:04:41.658 "zone_management": false 00:04:41.658 }, 00:04:41.658 "uuid": "0f9bc1f6-cb99-4e22-9c1f-ad3e5c97df34", 00:04:41.658 "zoned": false 00:04:41.658 }, 00:04:41.658 { 00:04:41.658 "aliases": [ 00:04:41.658 "7c8fe9cd-75fa-5117-a76b-417048b435af" 00:04:41.658 ], 00:04:41.658 "assigned_rate_limits": { 00:04:41.658 "r_mbytes_per_sec": 0, 00:04:41.658 "rw_ios_per_sec": 0, 00:04:41.658 "rw_mbytes_per_sec": 0, 00:04:41.658 "w_mbytes_per_sec": 0 00:04:41.658 }, 00:04:41.658 "block_size": 512, 00:04:41.658 "claimed": false, 00:04:41.658 "driver_specific": { 00:04:41.658 "passthru": { 00:04:41.658 "base_bdev_name": "Malloc3", 00:04:41.658 "name": "Passthru0" 00:04:41.658 } 00:04:41.658 }, 00:04:41.658 "memory_domains": [ 00:04:41.658 { 00:04:41.658 "dma_device_id": "system", 00:04:41.658 "dma_device_type": 1 00:04:41.658 }, 00:04:41.658 { 00:04:41.658 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:04:41.658 "dma_device_type": 2 00:04:41.658 } 00:04:41.658 ], 00:04:41.658 "name": "Passthru0", 00:04:41.658 "num_blocks": 16384, 00:04:41.658 "product_name": "passthru", 00:04:41.658 "supported_io_types": { 00:04:41.658 "abort": true, 00:04:41.658 "compare": false, 00:04:41.658 "compare_and_write": false, 00:04:41.658 "copy": true, 00:04:41.658 "flush": true, 00:04:41.658 "get_zone_info": false, 00:04:41.658 "nvme_admin": false, 00:04:41.658 "nvme_io": false, 00:04:41.658 "nvme_io_md": false, 00:04:41.658 "nvme_iov_md": false, 00:04:41.658 "read": true, 00:04:41.658 "reset": true, 00:04:41.658 "seek_data": false, 00:04:41.658 "seek_hole": false, 00:04:41.658 "unmap": true, 00:04:41.658 "write": true, 00:04:41.658 "write_zeroes": true, 00:04:41.658 "zcopy": true, 00:04:41.658 "zone_append": false, 00:04:41.658 "zone_management": false 00:04:41.658 }, 00:04:41.658 "uuid": "7c8fe9cd-75fa-5117-a76b-417048b435af", 00:04:41.658 "zoned": false 00:04:41.658 } 00:04:41.658 ]' 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.658 08:58:20 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.658 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.917 00:04:41.917 real 0m0.323s 00:04:41.917 user 0m0.212s 00:04:41.917 sys 0m0.042s 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.917 08:58:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.917 ************************************ 00:04:41.917 END TEST rpc_daemon_integrity 00:04:41.917 ************************************ 00:04:41.917 08:58:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.917 08:58:20 rpc -- rpc/rpc.sh@84 -- # killprocess 58680 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 58680 ']' 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@958 -- # kill -0 58680 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58680 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.917 killing process with pid 58680 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58680' 00:04:41.917 08:58:20 rpc -- common/autotest_common.sh@973 -- # kill 58680 00:04:41.917 08:58:20 
rpc -- common/autotest_common.sh@978 -- # wait 58680 00:04:42.485 00:04:42.485 real 0m3.302s 00:04:42.485 user 0m4.330s 00:04:42.485 sys 0m0.789s 00:04:42.485 08:58:21 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.485 08:58:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.485 ************************************ 00:04:42.485 END TEST rpc 00:04:42.485 ************************************ 00:04:42.485 08:58:21 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:42.485 08:58:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.485 08:58:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.485 08:58:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.485 ************************************ 00:04:42.485 START TEST skip_rpc 00:04:42.485 ************************************ 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:42.485 * Looking for test storage... 
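Just below, scripts/common.sh decides whether the installed lcov predates version 2 by splitting dotted versions on `.` and comparing them field by field (`lt 1.15 2`, via cmp_versions). A hedged re-sketch of that comparison with simplified names — `version_lt` is not the harness's function, and purely numeric fields without leading zeros are assumed:

```shell
# Sketch of a dotted-version less-than test (cf. cmp_versions in
# scripts/common.sh): split on dots, compare numerically field by field,
# treating missing trailing fields as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # prints the message
```

Note the comparison is numeric, so 1.2 < 1.10 — the reason the harness splits fields instead of comparing version strings lexically.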
00:04:42.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.485 08:58:21 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.485 ' 00:04:42.485 08:58:21 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.485 --rc genhtml_branch_coverage=1 00:04:42.485 --rc genhtml_function_coverage=1 00:04:42.485 --rc genhtml_legend=1 00:04:42.485 --rc geninfo_all_blocks=1 00:04:42.485 --rc geninfo_unexecuted_blocks=1 00:04:42.485 00:04:42.486 ' 00:04:42.486 08:58:21 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.486 08:58:21 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.486 08:58:21 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.486 08:58:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.486 08:58:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.486 08:58:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 ************************************ 00:04:42.486 START TEST skip_rpc 00:04:42.486 ************************************ 00:04:42.486 08:58:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:42.486 08:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58955 00:04:42.486 08:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.486 08:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.486 08:58:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.745 [2024-11-20 08:58:21.472617] Starting SPDK v25.01-pre 
git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:42.745 [2024-11-20 08:58:21.472823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:04:42.745 [2024-11-20 08:58:21.631997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.004 [2024-11-20 08:58:21.720639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.272 2024/11/20 08:58:26 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:48.272 
08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58955 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58955 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.272 killing process with pid 58955 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58955 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58955 00:04:48.272 00:04:48.272 real 0m5.427s 00:04:48.272 user 0m5.011s 00:04:48.272 sys 0m0.311s 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.272 08:58:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.272 ************************************ 00:04:48.272 END TEST skip_rpc 00:04:48.272 ************************************ 00:04:48.272 08:58:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.272 08:58:26 
skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.272 08:58:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.272 08:58:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.272 ************************************ 00:04:48.272 START TEST skip_rpc_with_json 00:04:48.272 ************************************ 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59047 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59047 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59047 ']' 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.272 08:58:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.272 [2024-11-20 08:58:26.924155] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
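Shortly below, `rpc_cmd nvmf_get_transports --trtype tcp` fails with Code=-19 (No such device) because no TCP transport exists yet; it only succeeds after `nvmf_create_transport -t tcp`. On the wire this is plain JSON-RPC 2.0 over the /var/tmp/spdk.sock Unix socket — a sketch of the request and error-response framing, where the `id` value is illustrative and not taken from this log:

```json
{"jsonrpc": "2.0", "id": 1, "method": "nvmf_get_transports", "params": {"trtype": "tcp"}}
{"jsonrpc": "2.0", "id": 1, "error": {"code": -19, "message": "No such device"}}
```

The Go RPC client surfaces this as the `err: Code=-19 Msg=No such device` line seen in the transcript.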
00:04:48.273 [2024-11-20 08:58:26.924253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59047 ] 00:04:48.273 [2024-11-20 08:58:27.068593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.273 [2024-11-20 08:58:27.133685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.531 [2024-11-20 08:58:27.424083] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.531 2024/11/20 08:58:27 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:48.531 request: 00:04:48.531 { 00:04:48.531 "method": "nvmf_get_transports", 00:04:48.531 "params": { 00:04:48.531 "trtype": "tcp" 00:04:48.531 } 00:04:48.531 } 00:04:48.531 Got JSON-RPC error response 00:04:48.531 GoRPCClient: error on JSON-RPC call 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@10 -- # set +x 00:04:48.531 [2024-11-20 08:58:27.436197] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.531 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.790 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.790 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.790 { 00:04:48.790 "subsystems": [ 00:04:48.790 { 00:04:48.790 "subsystem": "fsdev", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "fsdev_set_opts", 00:04:48.790 "params": { 00:04:48.790 "fsdev_io_cache_size": 256, 00:04:48.790 "fsdev_io_pool_size": 65535 00:04:48.790 } 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "keyring", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "iobuf", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "iobuf_set_options", 00:04:48.790 "params": { 00:04:48.790 "enable_numa": false, 00:04:48.790 "large_bufsize": 135168, 00:04:48.790 "large_pool_count": 1024, 00:04:48.790 "small_bufsize": 8192, 00:04:48.790 "small_pool_count": 8192 00:04:48.790 } 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "sock", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "sock_set_default_impl", 00:04:48.790 "params": { 00:04:48.790 "impl_name": "posix" 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "sock_impl_set_options", 00:04:48.790 "params": { 00:04:48.790 "enable_ktls": false, 00:04:48.790 
"enable_placement_id": 0, 00:04:48.790 "enable_quickack": false, 00:04:48.790 "enable_recv_pipe": true, 00:04:48.790 "enable_zerocopy_send_client": false, 00:04:48.790 "enable_zerocopy_send_server": true, 00:04:48.790 "impl_name": "ssl", 00:04:48.790 "recv_buf_size": 4096, 00:04:48.790 "send_buf_size": 4096, 00:04:48.790 "tls_version": 0, 00:04:48.790 "zerocopy_threshold": 0 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "sock_impl_set_options", 00:04:48.790 "params": { 00:04:48.790 "enable_ktls": false, 00:04:48.790 "enable_placement_id": 0, 00:04:48.790 "enable_quickack": false, 00:04:48.790 "enable_recv_pipe": true, 00:04:48.790 "enable_zerocopy_send_client": false, 00:04:48.790 "enable_zerocopy_send_server": true, 00:04:48.790 "impl_name": "posix", 00:04:48.790 "recv_buf_size": 2097152, 00:04:48.790 "send_buf_size": 2097152, 00:04:48.790 "tls_version": 0, 00:04:48.790 "zerocopy_threshold": 0 00:04:48.790 } 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "vmd", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "accel", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "accel_set_options", 00:04:48.790 "params": { 00:04:48.790 "buf_count": 2048, 00:04:48.790 "large_cache_size": 16, 00:04:48.790 "sequence_count": 2048, 00:04:48.790 "small_cache_size": 128, 00:04:48.790 "task_count": 2048 00:04:48.790 } 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "bdev", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "bdev_set_options", 00:04:48.790 "params": { 00:04:48.790 "bdev_auto_examine": true, 00:04:48.790 "bdev_io_cache_size": 256, 00:04:48.790 "bdev_io_pool_size": 65535, 00:04:48.790 "iobuf_large_cache_size": 16, 00:04:48.790 "iobuf_small_cache_size": 128 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "bdev_raid_set_options", 00:04:48.790 "params": { 00:04:48.790 
"process_max_bandwidth_mb_sec": 0, 00:04:48.790 "process_window_size_kb": 1024 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "bdev_iscsi_set_options", 00:04:48.790 "params": { 00:04:48.790 "timeout_sec": 30 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "bdev_nvme_set_options", 00:04:48.790 "params": { 00:04:48.790 "action_on_timeout": "none", 00:04:48.790 "allow_accel_sequence": false, 00:04:48.790 "arbitration_burst": 0, 00:04:48.790 "bdev_retry_count": 3, 00:04:48.790 "ctrlr_loss_timeout_sec": 0, 00:04:48.790 "delay_cmd_submit": true, 00:04:48.790 "dhchap_dhgroups": [ 00:04:48.790 "null", 00:04:48.790 "ffdhe2048", 00:04:48.790 "ffdhe3072", 00:04:48.790 "ffdhe4096", 00:04:48.790 "ffdhe6144", 00:04:48.790 "ffdhe8192" 00:04:48.790 ], 00:04:48.790 "dhchap_digests": [ 00:04:48.790 "sha256", 00:04:48.790 "sha384", 00:04:48.790 "sha512" 00:04:48.790 ], 00:04:48.790 "disable_auto_failback": false, 00:04:48.790 "fast_io_fail_timeout_sec": 0, 00:04:48.790 "generate_uuids": false, 00:04:48.790 "high_priority_weight": 0, 00:04:48.790 "io_path_stat": false, 00:04:48.790 "io_queue_requests": 0, 00:04:48.790 "keep_alive_timeout_ms": 10000, 00:04:48.790 "low_priority_weight": 0, 00:04:48.790 "medium_priority_weight": 0, 00:04:48.790 "nvme_adminq_poll_period_us": 10000, 00:04:48.790 "nvme_error_stat": false, 00:04:48.790 "nvme_ioq_poll_period_us": 0, 00:04:48.790 "rdma_cm_event_timeout_ms": 0, 00:04:48.790 "rdma_max_cq_size": 0, 00:04:48.790 "rdma_srq_size": 0, 00:04:48.790 "reconnect_delay_sec": 0, 00:04:48.790 "timeout_admin_us": 0, 00:04:48.790 "timeout_us": 0, 00:04:48.790 "transport_ack_timeout": 0, 00:04:48.790 "transport_retry_count": 4, 00:04:48.790 "transport_tos": 0 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": "bdev_nvme_set_hotplug", 00:04:48.790 "params": { 00:04:48.790 "enable": false, 00:04:48.790 "period_us": 100000 00:04:48.790 } 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "method": 
"bdev_wait_for_examine" 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "scsi", 00:04:48.790 "config": null 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "scheduler", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "framework_set_scheduler", 00:04:48.790 "params": { 00:04:48.790 "name": "static" 00:04:48.790 } 00:04:48.790 } 00:04:48.790 ] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "vhost_scsi", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "vhost_blk", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "ublk", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "nbd", 00:04:48.790 "config": [] 00:04:48.790 }, 00:04:48.790 { 00:04:48.790 "subsystem": "nvmf", 00:04:48.790 "config": [ 00:04:48.790 { 00:04:48.790 "method": "nvmf_set_config", 00:04:48.790 "params": { 00:04:48.790 "admin_cmd_passthru": { 00:04:48.790 "identify_ctrlr": false 00:04:48.790 }, 00:04:48.790 "dhchap_dhgroups": [ 00:04:48.790 "null", 00:04:48.790 "ffdhe2048", 00:04:48.790 "ffdhe3072", 00:04:48.790 "ffdhe4096", 00:04:48.790 "ffdhe6144", 00:04:48.791 "ffdhe8192" 00:04:48.791 ], 00:04:48.791 "dhchap_digests": [ 00:04:48.791 "sha256", 00:04:48.791 "sha384", 00:04:48.791 "sha512" 00:04:48.791 ], 00:04:48.791 "discovery_filter": "match_any" 00:04:48.791 } 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "method": "nvmf_set_max_subsystems", 00:04:48.791 "params": { 00:04:48.791 "max_subsystems": 1024 00:04:48.791 } 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "method": "nvmf_set_crdt", 00:04:48.791 "params": { 00:04:48.791 "crdt1": 0, 00:04:48.791 "crdt2": 0, 00:04:48.791 "crdt3": 0 00:04:48.791 } 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "method": "nvmf_create_transport", 00:04:48.791 "params": { 00:04:48.791 "abort_timeout_sec": 1, 00:04:48.791 "ack_timeout": 0, 00:04:48.791 "buf_cache_size": 4294967295, 00:04:48.791 
"c2h_success": true, 00:04:48.791 "data_wr_pool_size": 0, 00:04:48.791 "dif_insert_or_strip": false, 00:04:48.791 "in_capsule_data_size": 4096, 00:04:48.791 "io_unit_size": 131072, 00:04:48.791 "max_aq_depth": 128, 00:04:48.791 "max_io_qpairs_per_ctrlr": 127, 00:04:48.791 "max_io_size": 131072, 00:04:48.791 "max_queue_depth": 128, 00:04:48.791 "num_shared_buffers": 511, 00:04:48.791 "sock_priority": 0, 00:04:48.791 "trtype": "TCP", 00:04:48.791 "zcopy": false 00:04:48.791 } 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 }, 00:04:48.791 { 00:04:48.791 "subsystem": "iscsi", 00:04:48.791 "config": [ 00:04:48.791 { 00:04:48.791 "method": "iscsi_set_options", 00:04:48.791 "params": { 00:04:48.791 "allow_duplicated_isid": false, 00:04:48.791 "chap_group": 0, 00:04:48.791 "data_out_pool_size": 2048, 00:04:48.791 "default_time2retain": 20, 00:04:48.791 "default_time2wait": 2, 00:04:48.791 "disable_chap": false, 00:04:48.791 "error_recovery_level": 0, 00:04:48.791 "first_burst_length": 8192, 00:04:48.791 "immediate_data": true, 00:04:48.791 "immediate_data_pool_size": 16384, 00:04:48.791 "max_connections_per_session": 2, 00:04:48.791 "max_large_datain_per_connection": 64, 00:04:48.791 "max_queue_depth": 64, 00:04:48.791 "max_r2t_per_connection": 4, 00:04:48.791 "max_sessions": 128, 00:04:48.791 "mutual_chap": false, 00:04:48.791 "node_base": "iqn.2016-06.io.spdk", 00:04:48.791 "nop_in_interval": 30, 00:04:48.791 "nop_timeout": 60, 00:04:48.791 "pdu_pool_size": 36864, 00:04:48.791 "require_chap": false 00:04:48.791 } 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 } 00:04:48.791 ] 00:04:48.791 } 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59047 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59047 ']' 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # 
kill -0 59047 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59047 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.791 killing process with pid 59047 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59047' 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59047 00:04:48.791 08:58:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59047 00:04:49.358 08:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59073 00:04:49.358 08:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:49.358 08:58:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59073 ']' 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.629 killing process with pid 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59073' 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59073 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.629 00:04:54.629 real 0m6.637s 00:04:54.629 user 0m6.161s 00:04:54.629 sys 0m0.656s 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.629 08:58:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.629 ************************************ 00:04:54.629 END TEST skip_rpc_with_json 00:04:54.629 ************************************ 00:04:54.629 08:58:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:54.887 08:58:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.887 08:58:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.887 08:58:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.887 ************************************ 00:04:54.887 START TEST skip_rpc_with_delay 00:04:54.887 ************************************ 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.887 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:54.888 [2024-11-20 08:58:33.635242] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.888 ************************************ 00:04:54.888 END TEST skip_rpc_with_delay 00:04:54.888 ************************************ 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.888 00:04:54.888 real 0m0.102s 00:04:54.888 user 0m0.069s 00:04:54.888 sys 0m0.030s 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.888 08:58:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:54.888 08:58:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:54.888 08:58:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:54.888 08:58:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:54.888 08:58:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.888 08:58:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.888 08:58:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.888 ************************************ 00:04:54.888 START TEST exit_on_failed_rpc_init 00:04:54.888 ************************************ 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59183 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59183 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59183 ']' 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- 
rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.888 08:58:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:54.888 [2024-11-20 08:58:33.784862] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:54.888 [2024-11-20 08:58:33.784978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59183 ] 00:04:55.147 [2024-11-20 08:58:33.933533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.147 [2024-11-20 08:58:33.997827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@652 -- # local es=0 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.083 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.084 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.084 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.084 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.084 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.084 08:58:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.084 [2024-11-20 08:58:34.934097] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:56.084 [2024-11-20 08:58:34.934257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59213 ] 00:04:56.342 [2024-11-20 08:58:35.092528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.342 [2024-11-20 08:58:35.160259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.342 [2024-11-20 08:58:35.160374] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:56.342 [2024-11-20 08:58:35.160392] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.342 [2024-11-20 08:58:35.160402] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59183 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59183 ']' 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59183 00:04:56.342 08:58:35 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.342 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59183 00:04:56.602 killing process with pid 59183 00:04:56.602 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.602 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.602 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59183' 00:04:56.602 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59183 00:04:56.602 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59183 00:04:56.860 00:04:56.860 real 0m1.971s 00:04:56.860 user 0m2.357s 00:04:56.860 sys 0m0.453s 00:04:56.860 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.860 08:58:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.860 ************************************ 00:04:56.860 END TEST exit_on_failed_rpc_init 00:04:56.860 ************************************ 00:04:56.860 08:58:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:56.860 00:04:56.860 real 0m14.546s 00:04:56.860 user 0m13.770s 00:04:56.860 sys 0m1.673s 00:04:56.861 08:58:35 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.861 08:58:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 ************************************ 00:04:56.861 END TEST skip_rpc 00:04:56.861 ************************************ 00:04:56.861 08:58:35 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:56.861 08:58:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.861 08:58:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.861 08:58:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 ************************************ 00:04:56.861 START TEST rpc_client 00:04:56.861 ************************************ 00:04:56.861 08:58:35 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:57.120 * Looking for test storage... 00:04:57.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.120 08:58:35 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.120 --rc genhtml_branch_coverage=1 00:04:57.120 --rc genhtml_function_coverage=1 00:04:57.120 --rc genhtml_legend=1 00:04:57.120 --rc geninfo_all_blocks=1 00:04:57.120 --rc geninfo_unexecuted_blocks=1 00:04:57.120 00:04:57.120 ' 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.120 --rc genhtml_branch_coverage=1 00:04:57.120 --rc genhtml_function_coverage=1 00:04:57.120 --rc 
genhtml_legend=1 00:04:57.120 --rc geninfo_all_blocks=1 00:04:57.120 --rc geninfo_unexecuted_blocks=1 00:04:57.120 00:04:57.120 ' 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.120 --rc genhtml_branch_coverage=1 00:04:57.120 --rc genhtml_function_coverage=1 00:04:57.120 --rc genhtml_legend=1 00:04:57.120 --rc geninfo_all_blocks=1 00:04:57.120 --rc geninfo_unexecuted_blocks=1 00:04:57.120 00:04:57.120 ' 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.120 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.120 --rc genhtml_branch_coverage=1 00:04:57.120 --rc genhtml_function_coverage=1 00:04:57.120 --rc genhtml_legend=1 00:04:57.120 --rc geninfo_all_blocks=1 00:04:57.120 --rc geninfo_unexecuted_blocks=1 00:04:57.120 00:04:57.120 ' 00:04:57.120 08:58:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:57.120 OK 00:04:57.120 08:58:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:57.120 00:04:57.120 real 0m0.198s 00:04:57.120 user 0m0.116s 00:04:57.120 sys 0m0.092s 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.120 08:58:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:57.120 ************************************ 00:04:57.120 END TEST rpc_client 00:04:57.120 ************************************ 00:04:57.120 08:58:36 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:57.120 08:58:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.120 08:58:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.120 08:58:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.120 ************************************ 00:04:57.120 START TEST json_config 
00:04:57.120 ************************************ 00:04:57.120 08:58:36 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.381 08:58:36 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.381 08:58:36 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.381 08:58:36 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.381 08:58:36 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.381 08:58:36 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.381 08:58:36 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:57.381 08:58:36 json_config -- scripts/common.sh@345 -- # : 1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.381 08:58:36 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.381 08:58:36 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@353 -- # local d=1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.381 08:58:36 json_config -- scripts/common.sh@355 -- # echo 1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.381 08:58:36 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@353 -- # local d=2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.381 08:58:36 json_config -- scripts/common.sh@355 -- # echo 2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.381 08:58:36 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.381 08:58:36 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.381 08:58:36 json_config -- scripts/common.sh@368 -- # return 0 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.381 --rc genhtml_branch_coverage=1 00:04:57.381 --rc genhtml_function_coverage=1 00:04:57.381 --rc genhtml_legend=1 00:04:57.381 --rc geninfo_all_blocks=1 00:04:57.381 --rc geninfo_unexecuted_blocks=1 00:04:57.381 00:04:57.381 ' 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.381 --rc genhtml_branch_coverage=1 00:04:57.381 --rc genhtml_function_coverage=1 00:04:57.381 --rc genhtml_legend=1 00:04:57.381 --rc geninfo_all_blocks=1 00:04:57.381 --rc geninfo_unexecuted_blocks=1 00:04:57.381 00:04:57.381 ' 00:04:57.381 08:58:36 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.381 --rc genhtml_branch_coverage=1 00:04:57.381 --rc genhtml_function_coverage=1 00:04:57.381 --rc genhtml_legend=1 00:04:57.381 --rc geninfo_all_blocks=1 00:04:57.381 --rc geninfo_unexecuted_blocks=1 00:04:57.381 00:04:57.381 ' 00:04:57.381 08:58:36 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.381 --rc genhtml_branch_coverage=1 00:04:57.381 --rc genhtml_function_coverage=1 00:04:57.381 --rc genhtml_legend=1 00:04:57.381 --rc geninfo_all_blocks=1 00:04:57.381 --rc geninfo_unexecuted_blocks=1 00:04:57.381 00:04:57.381 ' 00:04:57.381 08:58:36 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.381 
08:58:36 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.381 08:58:36 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:57.381 08:58:36 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.381 08:58:36 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.381 08:58:36 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.381 08:58:36 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.381 08:58:36 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.381 08:58:36 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.382 08:58:36 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.382 08:58:36 json_config -- paths/export.sh@5 -- # export PATH 00:04:57.382 08:58:36 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:57.382 08:58:36 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:57.382 08:58:36 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:57.382 08:58:36 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@50 -- # : 0 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:57.382 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:04:57.382 08:58:36 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:57.382 08:58:36 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@362 -- # 
trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:57.382 INFO: JSON configuration test init 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.382 08:58:36 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:57.382 08:58:36 json_config -- json_config/common.sh@9 -- # local app=target 00:04:57.382 08:58:36 json_config -- json_config/common.sh@10 -- # shift 00:04:57.382 08:58:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:57.382 08:58:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:57.382 08:58:36 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:57.382 08:58:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.382 08:58:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:57.382 08:58:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59352 00:04:57.382 Waiting for target to run... 00:04:57.382 08:58:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:57.382 08:58:36 json_config -- json_config/common.sh@25 -- # waitforlisten 59352 /var/tmp/spdk_tgt.sock 00:04:57.382 08:58:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 59352 ']' 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.382 08:58:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.382 [2024-11-20 08:58:36.283740] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:57.382 [2024-11-20 08:58:36.283896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59352 ] 00:04:57.954 [2024-11-20 08:58:36.739325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.954 [2024-11-20 08:58:36.786077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:58.521 00:04:58.521 08:58:37 json_config -- json_config/common.sh@26 -- # echo '' 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:58.521 08:58:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:58.521 08:58:37 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:58.521 08:58:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 
00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:59.089 08:58:37 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.089 08:58:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:59.089 08:58:37 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:59.089 08:58:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@54 -- # sort 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:59.348 08:58:38 
json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:59.348 08:58:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:59.348 08:58:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:59.348 08:58:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:59.348 08:58:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:59.348 08:58:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.348 08:58:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:59.915 MallocForNvmf0 00:04:59.915 08:58:38 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:59.915 08:58:38 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:00.174 MallocForNvmf1 00:05:00.174 08:58:38 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.174 08:58:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:00.433 [2024-11-20 08:58:39.194176] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.433 08:58:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.433 08:58:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:00.692 08:58:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.692 08:58:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:00.951 08:58:39 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:00.951 08:58:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:01.209 08:58:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:01.209 08:58:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 
00:05:01.468 [2024-11-20 08:58:40.374914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:01.726 08:58:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:01.726 08:58:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.726 08:58:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.726 08:58:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:01.726 08:58:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.726 08:58:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.726 08:58:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:01.726 08:58:40 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.726 08:58:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:01.985 MallocBdevForConfigChangeCheck 00:05:01.985 08:58:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:01.985 08:58:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.985 08:58:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.985 08:58:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:01.985 08:58:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.552 INFO: shutting down applications... 00:05:02.552 08:58:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
00:05:02.552 08:58:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:02.552 08:58:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:02.552 08:58:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:02.552 08:58:41 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:02.810 Calling clear_iscsi_subsystem 00:05:02.810 Calling clear_nvmf_subsystem 00:05:02.810 Calling clear_nbd_subsystem 00:05:02.810 Calling clear_ublk_subsystem 00:05:02.810 Calling clear_vhost_blk_subsystem 00:05:02.810 Calling clear_vhost_scsi_subsystem 00:05:02.810 Calling clear_bdev_subsystem 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:02.810 08:58:41 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:03.069 08:58:41 json_config -- json_config/json_config.sh@352 -- # break 00:05:03.069 08:58:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:03.069 08:58:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:03.069 08:58:41 json_config -- json_config/common.sh@31 -- # local app=target 00:05:03.069 08:58:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:05:03.069 08:58:41 json_config -- json_config/common.sh@35 -- # [[ -n 59352 ]] 00:05:03.069 08:58:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59352 00:05:03.069 08:58:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:03.069 08:58:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.069 08:58:41 json_config -- json_config/common.sh@41 -- # kill -0 59352 00:05:03.069 08:58:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.636 08:58:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.636 08:58:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.636 08:58:42 json_config -- json_config/common.sh@41 -- # kill -0 59352 00:05:03.636 08:58:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:03.636 08:58:42 json_config -- json_config/common.sh@43 -- # break 00:05:03.636 08:58:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:03.636 SPDK target shutdown done 00:05:03.636 08:58:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:03.636 INFO: relaunching applications... 00:05:03.636 08:58:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:05:03.636 08:58:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.636 08:58:42 json_config -- json_config/common.sh@9 -- # local app=target 00:05:03.636 08:58:42 json_config -- json_config/common.sh@10 -- # shift 00:05:03.636 08:58:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.636 08:58:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.636 08:58:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.636 08:58:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.636 08:58:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.636 08:58:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59637 00:05:03.636 Waiting for target to run... 00:05:03.636 08:58:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.636 08:58:42 json_config -- json_config/common.sh@25 -- # waitforlisten 59637 /var/tmp/spdk_tgt.sock 00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@835 -- # '[' -z 59637 ']' 00:05:03.636 08:58:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.636 08:58:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.636 [2024-11-20 08:58:42.542119] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:03.636 [2024-11-20 08:58:42.542215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59637 ] 00:05:04.204 [2024-11-20 08:58:42.970062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.204 [2024-11-20 08:58:43.023605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.769 [2024-11-20 08:58:43.380394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.769 [2024-11-20 08:58:43.412500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:04.769 08:58:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.769 08:58:43 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:04.769 00:05:04.769 08:58:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:04.769 08:58:43 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:04.769 INFO: Checking if target configuration is the same... 00:05:04.769 08:58:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:04.769 08:58:43 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:04.769 08:58:43 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.769 08:58:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.769 + '[' 2 -ne 2 ']' 00:05:04.769 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:04.769 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:04.769 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:04.769 +++ basename /dev/fd/62 00:05:04.769 ++ mktemp /tmp/62.XXX 00:05:04.769 + tmp_file_1=/tmp/62.UY6 00:05:04.769 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:04.769 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:04.769 + tmp_file_2=/tmp/spdk_tgt_config.json.Pgd 00:05:04.769 + ret=0 00:05:04.769 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:05.336 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:05.336 + diff -u /tmp/62.UY6 /tmp/spdk_tgt_config.json.Pgd 00:05:05.336 INFO: JSON config files are the same 00:05:05.336 + echo 'INFO: JSON config files are the same' 00:05:05.336 + rm /tmp/62.UY6 /tmp/spdk_tgt_config.json.Pgd 00:05:05.336 + exit 0 00:05:05.336 08:58:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:05.336 INFO: changing configuration and checking if this can be detected... 00:05:05.336 08:58:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:05.336 08:58:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.336 08:58:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:05.594 08:58:44 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:05.594 08:58:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:05.594 08:58:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.594 + '[' 2 -ne 2 ']' 00:05:05.594 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:05.594 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:05.594 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:05.852 +++ basename /dev/fd/62 00:05:05.852 ++ mktemp /tmp/62.XXX 00:05:05.852 + tmp_file_1=/tmp/62.5Vf 00:05:05.852 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:05.852 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:05.852 + tmp_file_2=/tmp/spdk_tgt_config.json.Kmr 00:05:05.852 + ret=0 00:05:05.852 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:06.110 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:06.110 + diff -u /tmp/62.5Vf /tmp/spdk_tgt_config.json.Kmr 00:05:06.110 + ret=1 00:05:06.110 + echo '=== Start of file: /tmp/62.5Vf ===' 00:05:06.110 + cat /tmp/62.5Vf 00:05:06.110 + echo '=== End of file: /tmp/62.5Vf ===' 00:05:06.110 + echo '' 00:05:06.110 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Kmr ===' 00:05:06.110 + cat /tmp/spdk_tgt_config.json.Kmr 00:05:06.110 + echo '=== End of file: /tmp/spdk_tgt_config.json.Kmr ===' 00:05:06.110 + echo '' 00:05:06.110 + rm /tmp/62.5Vf 
/tmp/spdk_tgt_config.json.Kmr 00:05:06.110 + exit 1 00:05:06.110 INFO: configuration change detected. 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 59637 ]] 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:06.110 08:58:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.110 08:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.368 08:58:45 json_config -- 
json_config/json_config.sh@330 -- # killprocess 59637 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@954 -- # '[' -z 59637 ']' 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@958 -- # kill -0 59637 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@959 -- # uname 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59637 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.368 killing process with pid 59637 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59637' 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@973 -- # kill 59637 00:05:06.368 08:58:45 json_config -- common/autotest_common.sh@978 -- # wait 59637 00:05:06.626 08:58:45 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:06.626 08:58:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:06.626 08:58:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:06.626 08:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.626 08:58:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:06.626 INFO: Success 00:05:06.626 08:58:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:06.626 00:05:06.626 real 0m9.363s 00:05:06.626 user 0m13.672s 00:05:06.626 sys 0m1.952s 00:05:06.626 08:58:45 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.626 08:58:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.626 ************************************ 00:05:06.626 
END TEST json_config 00:05:06.626 ************************************ 00:05:06.626 08:58:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.626 08:58:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.626 08:58:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.626 08:58:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.626 ************************************ 00:05:06.626 START TEST json_config_extra_key 00:05:06.626 ************************************ 00:05:06.626 08:58:45 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.626 08:58:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:06.626 08:58:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:06.626 08:58:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@341 -- # 
ver2_l=1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:06.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.885 --rc genhtml_branch_coverage=1 00:05:06.885 
--rc genhtml_function_coverage=1 00:05:06.885 --rc genhtml_legend=1 00:05:06.885 --rc geninfo_all_blocks=1 00:05:06.885 --rc geninfo_unexecuted_blocks=1 00:05:06.885 00:05:06.885 ' 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:06.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.885 --rc genhtml_branch_coverage=1 00:05:06.885 --rc genhtml_function_coverage=1 00:05:06.885 --rc genhtml_legend=1 00:05:06.885 --rc geninfo_all_blocks=1 00:05:06.885 --rc geninfo_unexecuted_blocks=1 00:05:06.885 00:05:06.885 ' 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:06.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.885 --rc genhtml_branch_coverage=1 00:05:06.885 --rc genhtml_function_coverage=1 00:05:06.885 --rc genhtml_legend=1 00:05:06.885 --rc geninfo_all_blocks=1 00:05:06.885 --rc geninfo_unexecuted_blocks=1 00:05:06.885 00:05:06.885 ' 00:05:06.885 08:58:45 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:06.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.885 --rc genhtml_branch_coverage=1 00:05:06.885 --rc genhtml_function_coverage=1 00:05:06.885 --rc genhtml_legend=1 00:05:06.885 --rc geninfo_all_blocks=1 00:05:06.885 --rc geninfo_unexecuted_blocks=1 00:05:06.885 00:05:06.885 ' 00:05:06.885 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.885 
08:58:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.885 08:58:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.885 08:58:45 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.885 08:58:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.885 08:58:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.885 08:58:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:06.885 08:58:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:06.885 08:58:45 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:06.886 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:06.886 08:58:45 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:06.886 08:58:45 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:06.886 INFO: launching applications... 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:06.886 08:58:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59821 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.886 Waiting for target to run... 
00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59821 /var/tmp/spdk_tgt.sock 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59821 ']' 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.886 08:58:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.886 08:58:45 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:06.886 [2024-11-20 08:58:45.703162] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:06.886 [2024-11-20 08:58:45.703337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59821 ] 00:05:07.453 [2024-11-20 08:58:46.172686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.453 [2024-11-20 08:58:46.228136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.019 08:58:46 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.019 00:05:08.019 08:58:46 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.019 INFO: shutting down applications... 00:05:08.019 08:58:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:08.019 08:58:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59821 ]] 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59821 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59821 00:05:08.019 08:58:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.279 08:58:47 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59821 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.279 SPDK target shutdown done 00:05:08.279 08:58:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.279 Success 00:05:08.279 08:58:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:08.279 00:05:08.279 real 0m1.738s 00:05:08.279 user 0m1.552s 00:05:08.279 sys 0m0.531s 00:05:08.279 08:58:47 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.279 ************************************ 00:05:08.279 END TEST json_config_extra_key 00:05:08.279 08:58:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.279 ************************************ 00:05:08.538 08:58:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.538 08:58:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.539 08:58:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.539 08:58:47 -- common/autotest_common.sh@10 -- # set +x 00:05:08.539 ************************************ 00:05:08.539 START TEST alias_rpc 00:05:08.539 ************************************ 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:08.539 * Looking for test storage... 
00:05:08.539 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.539 08:58:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.539 --rc genhtml_branch_coverage=1 00:05:08.539 --rc genhtml_function_coverage=1 00:05:08.539 --rc genhtml_legend=1 00:05:08.539 --rc geninfo_all_blocks=1 00:05:08.539 --rc geninfo_unexecuted_blocks=1 00:05:08.539 00:05:08.539 ' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.539 --rc genhtml_branch_coverage=1 00:05:08.539 --rc genhtml_function_coverage=1 00:05:08.539 --rc genhtml_legend=1 00:05:08.539 --rc geninfo_all_blocks=1 00:05:08.539 --rc geninfo_unexecuted_blocks=1 00:05:08.539 00:05:08.539 ' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.539 --rc genhtml_branch_coverage=1 00:05:08.539 --rc genhtml_function_coverage=1 00:05:08.539 --rc genhtml_legend=1 00:05:08.539 --rc geninfo_all_blocks=1 00:05:08.539 --rc geninfo_unexecuted_blocks=1 00:05:08.539 00:05:08.539 ' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:08.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.539 --rc genhtml_branch_coverage=1 00:05:08.539 --rc genhtml_function_coverage=1 00:05:08.539 --rc genhtml_legend=1 00:05:08.539 --rc geninfo_all_blocks=1 00:05:08.539 --rc geninfo_unexecuted_blocks=1 00:05:08.539 00:05:08.539 ' 00:05:08.539 08:58:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:08.539 08:58:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59906 00:05:08.539 08:58:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59906 00:05:08.539 08:58:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59906 ']' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.539 08:58:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.797 [2024-11-20 08:58:47.495809] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:08.797 [2024-11-20 08:58:47.495957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59906 ] 00:05:08.797 [2024-11-20 08:58:47.644889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.797 [2024-11-20 08:58:47.712510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.735 08:58:48 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.735 08:58:48 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:09.735 08:58:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:09.994 08:58:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59906 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59906 ']' 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59906 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59906 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.994 killing process with pid 59906 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59906' 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@973 -- # kill 59906 00:05:09.994 08:58:48 alias_rpc -- common/autotest_common.sh@978 -- # wait 59906 00:05:10.560 00:05:10.560 real 0m2.077s 00:05:10.560 user 0m2.354s 00:05:10.560 sys 0m0.538s 00:05:10.560 08:58:49 alias_rpc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:10.560 08:58:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.560 ************************************ 00:05:10.560 END TEST alias_rpc 00:05:10.560 ************************************ 00:05:10.560 08:58:49 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:10.560 08:58:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.560 08:58:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.560 08:58:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.560 08:58:49 -- common/autotest_common.sh@10 -- # set +x 00:05:10.560 ************************************ 00:05:10.560 START TEST dpdk_mem_utility 00:05:10.560 ************************************ 00:05:10.560 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:10.560 * Looking for test storage... 00:05:10.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:10.560 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:10.561 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:10.561 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.819 08:58:49 dpdk_mem_utility 
-- scripts/common.sh@337 -- # read -ra ver2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.819 08:58:49 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.819 08:58:49 dpdk_mem_utility -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.819 --rc genhtml_branch_coverage=1 00:05:10.819 --rc genhtml_function_coverage=1 00:05:10.819 --rc genhtml_legend=1 00:05:10.819 --rc geninfo_all_blocks=1 00:05:10.819 --rc geninfo_unexecuted_blocks=1 00:05:10.819 00:05:10.819 ' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.819 --rc genhtml_branch_coverage=1 00:05:10.819 --rc genhtml_function_coverage=1 00:05:10.819 --rc genhtml_legend=1 00:05:10.819 --rc geninfo_all_blocks=1 00:05:10.819 --rc geninfo_unexecuted_blocks=1 00:05:10.819 00:05:10.819 ' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.819 --rc genhtml_branch_coverage=1 00:05:10.819 --rc genhtml_function_coverage=1 00:05:10.819 --rc genhtml_legend=1 00:05:10.819 --rc geninfo_all_blocks=1 00:05:10.819 --rc geninfo_unexecuted_blocks=1 00:05:10.819 00:05:10.819 ' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:10.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.819 --rc genhtml_branch_coverage=1 00:05:10.819 --rc genhtml_function_coverage=1 00:05:10.819 --rc genhtml_legend=1 00:05:10.819 --rc geninfo_all_blocks=1 00:05:10.819 --rc geninfo_unexecuted_blocks=1 00:05:10.819 00:05:10.819 ' 00:05:10.819 08:58:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:10.819 08:58:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60006 00:05:10.819 08:58:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.819 08:58:49 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60006 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 60006 ']' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.819 08:58:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.819 [2024-11-20 08:58:49.622202] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:10.819 [2024-11-20 08:58:49.622354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60006 ] 00:05:11.078 [2024-11-20 08:58:49.772593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.078 [2024-11-20 08:58:49.845256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.337 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.337 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:11.337 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:11.337 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:11.337 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:11.337 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:11.337 { 00:05:11.337 "filename": "/tmp/spdk_mem_dump.txt" 00:05:11.337 } 00:05:11.337 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.337 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:11.337 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:11.337 1 heaps totaling size 810.000000 MiB 00:05:11.337 size: 810.000000 MiB heap id: 0 00:05:11.337 end heaps---------- 00:05:11.337 9 mempools totaling size 595.772034 MiB 00:05:11.337 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:11.337 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:11.337 size: 92.545471 MiB name: bdev_io_60006 00:05:11.337 size: 50.003479 MiB name: msgpool_60006 00:05:11.337 size: 36.509338 MiB name: fsdev_io_60006 00:05:11.337 size: 21.763794 MiB name: PDU_Pool 00:05:11.337 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:11.337 size: 4.133484 MiB name: evtpool_60006 00:05:11.337 size: 0.026123 MiB name: Session_Pool 00:05:11.337 end mempools------- 00:05:11.337 6 memzones totaling size 4.142822 MiB 00:05:11.337 size: 1.000366 MiB name: RG_ring_0_60006 00:05:11.337 size: 1.000366 MiB name: RG_ring_1_60006 00:05:11.337 size: 1.000366 MiB name: RG_ring_4_60006 00:05:11.337 size: 1.000366 MiB name: RG_ring_5_60006 00:05:11.337 size: 0.125366 MiB name: RG_ring_2_60006 00:05:11.337 size: 0.015991 MiB name: RG_ring_3_60006 00:05:11.337 end memzones------- 00:05:11.337 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:11.598 heap id: 0 total size: 810.000000 MiB number of busy elements: 218 number of free elements: 15 00:05:11.598 list of free elements. 
size: 10.830627 MiB 00:05:11.598 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:11.598 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:11.598 element at address: 0x200000400000 with size: 0.996155 MiB 00:05:11.598 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:11.598 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:11.598 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:11.598 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:11.598 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:11.598 element at address: 0x20001a600000 with size: 0.572449 MiB 00:05:11.598 element at address: 0x200000c00000 with size: 0.491211 MiB 00:05:11.598 element at address: 0x20000a600000 with size: 0.489990 MiB 00:05:11.598 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:11.598 element at address: 0x200003e00000 with size: 0.481201 MiB 00:05:11.598 element at address: 0x200027a00000 with size: 0.398315 MiB 00:05:11.598 element at address: 0x200000800000 with size: 0.353394 MiB 00:05:11.598 list of standard malloc elements. 
size: 199.250488 MiB 00:05:11.598 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:11.598 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:11.598 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:11.598 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:11.598 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:11.598 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:11.598 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:11.598 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:11.598 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:11.598 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:11.598 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:11.598 element at 
address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000085a780 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000085a980 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:11.599 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:11.599 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 
00:05:11.599 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:11.599 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67d940 with 
size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:11.599 element at address: 
0x20001a6937c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:11.599 
element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:11.599 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a65f80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a66040 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6cc40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d800 with size: 0.000183 
MiB 00:05:11.599 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:11.599 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ed00 
with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:11.600 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:11.600 list of memzone associated elements. 
size: 599.918884 MiB 00:05:11.600 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:11.600 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:11.600 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:11.600 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:11.600 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:11.600 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_60006_0 00:05:11.600 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:11.600 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60006_0 00:05:11.600 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:11.600 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_60006_0 00:05:11.600 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:11.600 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:11.600 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:11.600 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:11.600 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:11.600 associated memzone info: size: 3.000122 MiB name: MP_evtpool_60006_0 00:05:11.600 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:11.600 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60006 00:05:11.600 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:11.600 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60006 00:05:11.600 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:11.600 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:11.600 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:11.600 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:11.600 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:11.600 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:11.600 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:11.600 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:11.600 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:11.600 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60006 00:05:11.600 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:11.600 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60006 00:05:11.600 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:11.600 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60006 00:05:11.600 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:11.600 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60006 00:05:11.600 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:11.600 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_60006 00:05:11.600 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:11.600 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60006 00:05:11.600 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:11.600 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:11.600 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:11.600 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:11.600 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:11.600 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:11.600 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:11.600 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_60006 00:05:11.600 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:05:11.600 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60006 00:05:11.600 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:11.600 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:11.600 element at address: 0x200027a66100 with size: 0.023743 MiB 00:05:11.600 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:11.600 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:05:11.600 associated memzone info: size: 0.015991 MiB name: RG_ring_3_60006 00:05:11.600 element at address: 0x200027a6c240 with size: 0.002441 MiB 00:05:11.600 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:11.600 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:11.600 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60006 00:05:11.600 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:11.600 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_60006 00:05:11.600 element at address: 0x20000085a840 with size: 0.000305 MiB 00:05:11.600 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60006 00:05:11.600 element at address: 0x200027a6cd00 with size: 0.000305 MiB 00:05:11.600 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:11.600 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:11.600 08:58:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60006 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 60006 ']' 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 60006 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60006 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.600 08:58:50 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60006' 00:05:11.600 killing process with pid 60006 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 60006 00:05:11.600 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 60006 00:05:11.859 00:05:11.859 real 0m1.390s 00:05:11.859 user 0m1.345s 00:05:11.859 sys 0m0.450s 00:05:11.859 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.859 ************************************ 00:05:11.859 END TEST dpdk_mem_utility 00:05:11.859 ************************************ 00:05:11.859 08:58:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.118 08:58:50 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.118 08:58:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.118 08:58:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.118 08:58:50 -- common/autotest_common.sh@10 -- # set +x 00:05:12.118 ************************************ 00:05:12.118 START TEST event 00:05:12.118 ************************************ 00:05:12.118 08:58:50 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:12.118 * Looking for test storage... 
00:05:12.118 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:12.118 08:58:50 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.118 08:58:50 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.118 08:58:50 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.118 08:58:50 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.118 08:58:50 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.118 08:58:50 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.118 08:58:50 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.118 08:58:50 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.118 08:58:50 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.118 08:58:50 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.118 08:58:50 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.118 08:58:50 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.118 08:58:50 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.118 08:58:50 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.118 08:58:50 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.118 08:58:50 event -- scripts/common.sh@344 -- # case "$op" in 00:05:12.118 08:58:50 event -- scripts/common.sh@345 -- # : 1 00:05:12.118 08:58:50 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.118 08:58:50 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.118 08:58:50 event -- scripts/common.sh@365 -- # decimal 1 00:05:12.118 08:58:50 event -- scripts/common.sh@353 -- # local d=1 00:05:12.118 08:58:50 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.118 08:58:50 event -- scripts/common.sh@355 -- # echo 1 00:05:12.118 08:58:50 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.118 08:58:50 event -- scripts/common.sh@366 -- # decimal 2 00:05:12.118 08:58:50 event -- scripts/common.sh@353 -- # local d=2 00:05:12.118 08:58:50 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.118 08:58:50 event -- scripts/common.sh@355 -- # echo 2 00:05:12.118 08:58:50 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.119 08:58:50 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.119 08:58:50 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.119 08:58:50 event -- scripts/common.sh@368 -- # return 0 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.119 --rc genhtml_branch_coverage=1 00:05:12.119 --rc genhtml_function_coverage=1 00:05:12.119 --rc genhtml_legend=1 00:05:12.119 --rc geninfo_all_blocks=1 00:05:12.119 --rc geninfo_unexecuted_blocks=1 00:05:12.119 00:05:12.119 ' 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.119 --rc genhtml_branch_coverage=1 00:05:12.119 --rc genhtml_function_coverage=1 00:05:12.119 --rc genhtml_legend=1 00:05:12.119 --rc geninfo_all_blocks=1 00:05:12.119 --rc geninfo_unexecuted_blocks=1 00:05:12.119 00:05:12.119 ' 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.119 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:12.119 --rc genhtml_branch_coverage=1 00:05:12.119 --rc genhtml_function_coverage=1 00:05:12.119 --rc genhtml_legend=1 00:05:12.119 --rc geninfo_all_blocks=1 00:05:12.119 --rc geninfo_unexecuted_blocks=1 00:05:12.119 00:05:12.119 ' 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.119 --rc genhtml_branch_coverage=1 00:05:12.119 --rc genhtml_function_coverage=1 00:05:12.119 --rc genhtml_legend=1 00:05:12.119 --rc geninfo_all_blocks=1 00:05:12.119 --rc geninfo_unexecuted_blocks=1 00:05:12.119 00:05:12.119 ' 00:05:12.119 08:58:50 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:12.119 08:58:50 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:12.119 08:58:50 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:12.119 08:58:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.119 08:58:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.119 ************************************ 00:05:12.119 START TEST event_perf 00:05:12.119 ************************************ 00:05:12.119 08:58:50 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:12.119 Running I/O for 1 seconds...[2024-11-20 08:58:51.003725] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
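event_perf is launched above with `-m 0xF`, so EAL brings up reactors on cores 0-3 in the lines that follow. A minimal bash sketch of expanding such a hex core mask into core ids (`mask_to_cores` is a hypothetical name, not an SPDK helper):

```shell
#!/usr/bin/env bash
# Sketch (not SPDK's actual helper): expand a hex core mask like the
# -m 0xF above into the lcore ids it pins (0xF -> cores 0 1 2 3).
mask_to_cores() {
    local mask=$(( $1 )) i=0
    local -a cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$i")    # low bit set -> core i is in the mask
        (( mask >>= 1, i++ ))
    done
    echo "${cores[*]}"
}
mask_to_cores 0xF   # -> 0 1 2 3
```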
00:05:12.119 [2024-11-20 08:58:51.003876] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60101 ] 00:05:12.379 [2024-11-20 08:58:51.152834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:12.379 [2024-11-20 08:58:51.221989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.379 Running I/O for 1 seconds...[2024-11-20 08:58:51.222174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.379 [2024-11-20 08:58:51.222279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.379 [2024-11-20 08:58:51.222284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.755 00:05:13.755 lcore 0: 188426 00:05:13.755 lcore 1: 188426 00:05:13.755 lcore 2: 188426 00:05:13.755 lcore 3: 188427 00:05:13.755 done. 
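The `cmp_versions` xtrace at the top of this section (scripts/common.sh) decides `lt 1.15 2` by splitting both versions on `IFS=.-:` with `read -ra` and comparing components left to right. A standalone sketch of that dot-split comparison, assuming purely numeric components (`version_lt` is a hypothetical name, not the SPDK function):

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions technique traced above: split each version
# on . - : into an array, then compare component by component, treating
# missing components as 0. Assumes numeric components only.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "1.15 < 2"
```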
00:05:13.755 00:05:13.755 real 0m1.295s 00:05:13.755 user 0m4.111s 00:05:13.755 sys 0m0.061s 00:05:13.755 08:58:52 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.755 08:58:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:13.755 ************************************ 00:05:13.755 END TEST event_perf 00:05:13.755 ************************************ 00:05:13.755 08:58:52 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.756 08:58:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:13.756 08:58:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.756 08:58:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.756 ************************************ 00:05:13.756 START TEST event_reactor 00:05:13.756 ************************************ 00:05:13.756 08:58:52 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:13.756 [2024-11-20 08:58:52.349580] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:13.756 [2024-11-20 08:58:52.349664] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60134 ] 00:05:13.756 [2024-11-20 08:58:52.490861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.756 [2024-11-20 08:58:52.557648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.713 test_start 00:05:14.713 oneshot 00:05:14.713 tick 100 00:05:14.713 tick 100 00:05:14.713 tick 250 00:05:14.713 tick 100 00:05:14.713 tick 100 00:05:14.713 tick 250 00:05:14.713 tick 100 00:05:14.713 tick 500 00:05:14.713 tick 100 00:05:14.713 tick 100 00:05:14.713 tick 250 00:05:14.713 tick 100 00:05:14.713 tick 100 00:05:14.713 test_end 00:05:14.713 00:05:14.713 real 0m1.282s 00:05:14.713 user 0m1.128s 00:05:14.713 sys 0m0.048s 00:05:14.713 08:58:53 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.713 08:58:53 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:14.713 ************************************ 00:05:14.713 END TEST event_reactor 00:05:14.713 ************************************ 00:05:14.971 08:58:53 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.971 08:58:53 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:14.971 08:58:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.971 08:58:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.971 ************************************ 00:05:14.971 START TEST event_reactor_perf 00:05:14.972 ************************************ 00:05:14.972 08:58:53 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:14.972 [2024-11-20 
08:58:53.687034] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:14.972 [2024-11-20 08:58:53.687615] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:05:14.972 [2024-11-20 08:58:53.835342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.230 [2024-11-20 08:58:53.895052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.164 test_start 00:05:16.164 test_end 00:05:16.164 Performance: 377542 events per second 00:05:16.164 00:05:16.164 real 0m1.287s 00:05:16.164 user 0m1.132s 00:05:16.164 sys 0m0.048s 00:05:16.164 08:58:54 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.164 08:58:54 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.164 ************************************ 00:05:16.164 END TEST event_reactor_perf 00:05:16.164 ************************************ 00:05:16.164 08:58:54 event -- event/event.sh@49 -- # uname -s 00:05:16.164 08:58:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:16.164 08:58:55 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.164 08:58:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.164 08:58:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.164 08:58:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.164 ************************************ 00:05:16.164 START TEST event_scheduler 00:05:16.164 ************************************ 00:05:16.164 08:58:55 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:16.423 * Looking for test storage... 
00:05:16.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:16.424 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:16.424 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60239 00:05:16.424 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:16.424 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.424 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60239 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60239 ']' 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX
domain socket /var/tmp/spdk.sock...' 00:05:16.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.424 08:58:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.424 [2024-11-20 08:58:55.249511] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:16.424 [2024-11-20 08:58:55.249635] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60239 ] 00:05:16.683 [2024-11-20 08:58:55.402326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.683 [2024-11-20 08:58:55.477357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.683 [2024-11-20 08:58:55.477548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.683 [2024-11-20 08:58:55.477674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.683 [2024-11-20 08:58:55.477676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.683 08:58:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.683 08:58:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:16.683 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:16.683 08:58:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.683 08:58:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.683 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.683 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.683 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.683 POWER: Cannot set governor of lcore 0 to performance 00:05:16.683 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.683 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.683 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:16.683 POWER: Cannot set governor of lcore 0 to userspace 00:05:16.683 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:16.683 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:16.683 POWER: Unable to set Power Management Environment for lcore 0 00:05:16.684 [2024-11-20 08:58:55.530718] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:16.684 [2024-11-20 08:58:55.530734] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:16.684 [2024-11-20 08:58:55.530745] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:16.684 [2024-11-20 08:58:55.530774] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:16.684 [2024-11-20 08:58:55.530786] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:16.684 [2024-11-20 08:58:55.530795] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:16.684 08:58:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.684 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:16.684 08:58:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.684 08:58:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 [2024-11-20 08:58:55.633601] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
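The scheduler test that follows creates one fully active pinned thread per core of the 0xF mask (masks 0x1 through 0x8, `-a 100`). A sketch of that per-core loop with `rpc_cmd` stubbed out by echo; in the real test it dispatches through SPDK's rpc.py `scheduler_plugin` interface:

```shell
#!/usr/bin/env bash
# Sketch of the per-core setup scheduler.sh performs above: one
# 100%-active pinned thread per core. rpc_cmd is stubbed here so the
# loop just prints the calls it would make.
rpc_cmd() { echo "rpc_cmd $*"; }
for core in 0 1 2 3; do
    printf -v mask '0x%x' $(( 1 << core ))   # 0x1, 0x2, 0x4, 0x8
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m "$mask" -a 100
done
```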
00:05:16.943 08:58:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:16.943 08:58:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.943 08:58:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 ************************************ 00:05:16.943 START TEST scheduler_create_thread 00:05:16.943 ************************************ 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 2 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 3 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 4 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 5 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.943 6 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.943 7 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.943 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 8 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 9 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 10 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.944 08:58:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.320 08:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.320 08:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:18.320 08:58:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:18.320 08:58:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.320 08:58:57 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.695 08:58:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.695 00:05:19.695 real 0m2.616s 00:05:19.695 user 0m0.019s 00:05:19.695 sys 0m0.005s 00:05:19.695 08:58:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.695 08:58:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.695 ************************************ 00:05:19.695 END TEST scheduler_create_thread 00:05:19.695 ************************************ 00:05:19.695 08:58:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:19.695 08:58:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60239 00:05:19.695 08:58:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60239 ']' 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60239 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60239 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:19.696 killing process with pid 60239 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60239' 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60239 00:05:19.696 08:58:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60239 00:05:19.954 [2024-11-20 08:58:58.739075] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:20.212 00:05:20.212 real 0m4.009s 00:05:20.212 user 0m5.921s 00:05:20.212 sys 0m0.342s 00:05:20.212 08:58:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.212 08:58:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.212 ************************************ 00:05:20.212 END TEST event_scheduler 00:05:20.212 ************************************ 00:05:20.212 08:58:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:20.212 08:58:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:20.212 08:58:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.212 08:58:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.212 08:58:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.212 ************************************ 00:05:20.212 START TEST app_repeat 00:05:20.212 ************************************ 00:05:20.212 08:58:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60343 00:05:20.212 08:58:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:20.213 
08:58:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.213 Process app_repeat pid: 60343 00:05:20.213 08:58:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60343' 00:05:20.213 08:58:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.213 spdk_app_start Round 0 00:05:20.213 08:58:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:20.213 08:58:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60343 /var/tmp/spdk-nbd.sock 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.213 08:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.213 [2024-11-20 08:58:59.109938] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
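app_repeat is gated above on a `waitforlisten`-style poll for `/var/tmp/spdk-nbd.sock` before any nbd RPCs are issued. A simplified stand-in for that wait loop (`waitforsocket` is a hypothetical name; the real autotest_common.sh helper also probes the socket through rpc.py, with max_retries=100 as logged):

```shell
#!/usr/bin/env bash
# Simplified stand-in for the waitforlisten pattern above: poll until
# the target process has created its UNIX RPC socket, with a retry cap.
waitforsocket() {
    local sock=$1 retries=${2:-100} i
    for (( i = 0; i < retries; i++ )); do
        [ -S "$sock" ] && return 0   # socket exists -> process is listening
        sleep 0.1
    done
    return 1
}
```

Typical usage mirrors the trap in the log: `waitforsocket /var/tmp/spdk-nbd.sock || { kill "$repeat_pid"; exit 1; }`.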
00:05:20.213 [2024-11-20 08:58:59.110032] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:05:20.470 [2024-11-20 08:58:59.254423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.470 [2024-11-20 08:58:59.322458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.470 [2024-11-20 08:58:59.322469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.727 08:58:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.727 08:58:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.727 08:58:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.985 Malloc0 00:05:20.985 08:58:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.244 Malloc1 00:05:21.244 08:59:00 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:21.244 08:59:00 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.244 08:59:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:21.502 /dev/nbd0 00:05:21.502 08:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:21.502 08:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.502 1+0 records in 00:05:21.502 1+0 
records out 00:05:21.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285136 s, 14.4 MB/s 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.502 08:59:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.502 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.502 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.502 08:59:00 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.070 /dev/nbd1 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.070 1+0 records in 00:05:22.070 1+0 records out 00:05:22.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292413 s, 14.0 MB/s 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.070 08:59:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.070 08:59:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.330 { 00:05:22.330 "bdev_name": "Malloc0", 00:05:22.330 "nbd_device": "/dev/nbd0" 00:05:22.330 }, 00:05:22.330 { 00:05:22.330 "bdev_name": "Malloc1", 00:05:22.330 "nbd_device": "/dev/nbd1" 00:05:22.330 } 00:05:22.330 ]' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.330 { 00:05:22.330 "bdev_name": "Malloc0", 00:05:22.330 "nbd_device": "/dev/nbd0" 00:05:22.330 }, 00:05:22.330 { 00:05:22.330 "bdev_name": "Malloc1", 00:05:22.330 "nbd_device": "/dev/nbd1" 00:05:22.330 } 00:05:22.330 ]' 
00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.330 /dev/nbd1' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.330 /dev/nbd1' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.330 08:59:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.330 256+0 records in 00:05:22.331 256+0 records out 00:05:22.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00779487 s, 135 MB/s 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.331 256+0 records in 00:05:22.331 256+0 records out 00:05:22.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029954 s, 35.0 MB/s 00:05:22.331 08:59:01 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.331 256+0 records in 00:05:22.331 256+0 records out 00:05:22.331 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247833 s, 42.3 MB/s 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.331 08:59:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.624 08:59:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.882 08:59:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.138 08:59:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:23.138 08:59:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.138 08:59:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.138 08:59:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.138 08:59:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.396 08:59:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.396 08:59:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.660 08:59:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.932 [2024-11-20 08:59:02.664117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.932 [2024-11-20 08:59:02.733228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.932 [2024-11-20 08:59:02.733238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.932 
[2024-11-20 08:59:02.788962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.932 [2024-11-20 08:59:02.789036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.214 08:59:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:27.214 spdk_app_start Round 1 00:05:27.214 08:59:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:27.214 08:59:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60343 /var/tmp/spdk-nbd.sock 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.214 08:59:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.214 08:59:05 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.473 Malloc0 00:05:27.473 08:59:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.732 Malloc1 00:05:27.732 08:59:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.732 08:59:06 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.732 08:59:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.990 /dev/nbd0 00:05:27.990 08:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.990 08:59:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.990 1+0 records in 00:05:27.990 1+0 records out 00:05:27.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236265 s, 17.3 MB/s 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:27.990 
08:59:06 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:27.990 08:59:06 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:27.990 08:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.990 08:59:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.990 08:59:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.249 /dev/nbd1 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.249 1+0 records in 00:05:28.249 1+0 records out 00:05:28.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230067 s, 17.8 MB/s 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:28.249 08:59:07 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:28.249 08:59:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.249 08:59:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.817 { 00:05:28.817 "bdev_name": "Malloc0", 00:05:28.817 "nbd_device": "/dev/nbd0" 00:05:28.817 }, 00:05:28.817 { 00:05:28.817 "bdev_name": "Malloc1", 00:05:28.817 "nbd_device": "/dev/nbd1" 00:05:28.817 } 00:05:28.817 ]' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.817 { 00:05:28.817 "bdev_name": "Malloc0", 00:05:28.817 "nbd_device": "/dev/nbd0" 00:05:28.817 }, 00:05:28.817 { 00:05:28.817 "bdev_name": "Malloc1", 00:05:28.817 "nbd_device": "/dev/nbd1" 00:05:28.817 } 00:05:28.817 ]' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.817 /dev/nbd1' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.817 /dev/nbd1' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.817 
08:59:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.817 256+0 records in 00:05:28.817 256+0 records out 00:05:28.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0078705 s, 133 MB/s 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.817 256+0 records in 00:05:28.817 256+0 records out 00:05:28.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206559 s, 50.8 MB/s 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.817 256+0 records in 00:05:28.817 256+0 records out 00:05:28.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245548 s, 42.7 MB/s 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.817 08:59:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.077 08:59:07 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.077 08:59:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.717 08:59:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.993 08:59:08 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.993 08:59:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.993 08:59:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.252 08:59:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.522 [2024-11-20 08:59:09.220876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.522 [2024-11-20 08:59:09.281380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.522 [2024-11-20 08:59:09.281392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.522 [2024-11-20 08:59:09.340925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.522 [2024-11-20 08:59:09.341021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.807 spdk_app_start Round 2 00:05:33.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:33.807 08:59:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.807 08:59:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:33.807 08:59:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60343 /var/tmp/spdk-nbd.sock 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.807 08:59:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.807 08:59:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.807 Malloc0 00:05:33.807 08:59:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.374 Malloc1 00:05:34.374 08:59:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.374 08:59:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.632 /dev/nbd0 00:05:34.632 08:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.632 08:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
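The waitfornbd sequence above (`grep -q -w nbd0 /proc/partitions` inside a bounded `(( i <= 20 ))` loop, then `break`) is a generic poll-until-ready pattern. A simplified, hypothetical version of that retry helper — the real autotest_common.sh waitfornbd additionally reads one 4096-byte block back with `dd iflag=direct` to confirm the device answers I/O, as the next log lines show:

```shell
# Generic retry helper in the spirit of waitfornbd: run a predicate up to
# 20 times, pausing briefly between attempts, and fail if it never succeeds.
wait_for() {
    local i
    for ((i = 1; i <= 20; i++)); do
        if "$@"; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

With a real device the predicate would be something like `wait_for grep -q -w nbd0 /proc/partitions`.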
00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.632 1+0 records in 00:05:34.632 1+0 records out 00:05:34.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284636 s, 14.4 MB/s 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.632 08:59:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.632 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.632 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.632 08:59:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.890 /dev/nbd1 00:05:34.890 08:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.890 08:59:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.890 08:59:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:34.891 08:59:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.891 1+0 records in 00:05:34.891 1+0 records out 00:05:34.891 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283264 s, 14.5 MB/s 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:34.891 08:59:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:34.891 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.891 08:59:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.891 08:59:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.891 08:59:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.891 08:59:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.457 { 00:05:35.457 "bdev_name": "Malloc0", 00:05:35.457 "nbd_device": "/dev/nbd0" 00:05:35.457 }, 00:05:35.457 { 00:05:35.457 "bdev_name": "Malloc1", 00:05:35.457 "nbd_device": "/dev/nbd1" 00:05:35.457 } 00:05:35.457 ]' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.457 { 
00:05:35.457 "bdev_name": "Malloc0", 00:05:35.457 "nbd_device": "/dev/nbd0" 00:05:35.457 }, 00:05:35.457 { 00:05:35.457 "bdev_name": "Malloc1", 00:05:35.457 "nbd_device": "/dev/nbd1" 00:05:35.457 } 00:05:35.457 ]' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.457 /dev/nbd1' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.457 /dev/nbd1' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.457 256+0 records in 00:05:35.457 256+0 records out 00:05:35.457 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532811 s, 197 MB/s 00:05:35.457 08:59:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.458 08:59:14 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.458 256+0 records in 00:05:35.458 256+0 records out 00:05:35.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256987 s, 40.8 MB/s 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.458 256+0 records in 00:05:35.458 256+0 records out 00:05:35.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301775 s, 34.7 MB/s 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
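The block above is the core data-integrity check of nbd_dd_data_verify: fill a temp file with 1 MiB from /dev/urandom, `dd` it onto each nbd device with `oflag=direct`, then `cmp -b -n 1M` the devices against the file and remove it. A sketch of the same round-trip using a plain temp file in place of /dev/nbd0, so it runs without nbd devices (function name and use of plain files are illustrative):

```shell
# Write/read-back verify, as in nbd_dd_data_verify, but against a plain
# file instead of an nbd device so the sketch is runnable anywhere.
verify_roundtrip() {
    local src dst rc
    src=$(mktemp) dst=$(mktemp)
    dd if=/dev/urandom of="$src" bs=4096 count=256 status=none  # 1 MiB of random data
    dd if="$src" of="$dst" bs=4096 count=256 status=none        # "write" phase
    cmp -n 1048576 "$src" "$dst"                                # "verify" phase
    rc=$?
    rm -f "$src" "$dst"
    return $rc
}
```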
00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.458 08:59:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.716 08:59:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.283 08:59:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.283 08:59:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.540 08:59:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.540 08:59:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.853 08:59:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.126 
[2024-11-20 08:59:15.807413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.126 [2024-11-20 08:59:15.858930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.126 [2024-11-20 08:59:15.858942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.126 [2024-11-20 08:59:15.914072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.126 [2024-11-20 08:59:15.914136] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.417 08:59:18 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60343 /var/tmp/spdk-nbd.sock 00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.417 08:59:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.417 08:59:19 event.app_repeat -- event/event.sh@39 -- # killprocess 60343 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60343 ']' 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60343 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60343 00:05:40.417 killing process with pid 60343 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60343' 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60343 00:05:40.417 08:59:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60343 00:05:40.417 spdk_app_start is called in Round 0. 00:05:40.417 Shutdown signal received, stop current app iteration 00:05:40.417 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:40.417 spdk_app_start is called in Round 1. 00:05:40.417 Shutdown signal received, stop current app iteration 00:05:40.418 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:40.418 spdk_app_start is called in Round 2. 
00:05:40.418 Shutdown signal received, stop current app iteration 00:05:40.418 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:40.418 spdk_app_start is called in Round 3. 00:05:40.418 Shutdown signal received, stop current app iteration 00:05:40.418 08:59:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:40.418 08:59:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:40.418 00:05:40.418 real 0m20.133s 00:05:40.418 user 0m46.275s 00:05:40.418 sys 0m3.284s 00:05:40.418 08:59:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.418 08:59:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.418 ************************************ 00:05:40.418 END TEST app_repeat 00:05:40.418 ************************************ 00:05:40.418 08:59:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:40.418 08:59:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.418 08:59:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.418 08:59:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.418 08:59:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.418 ************************************ 00:05:40.418 START TEST cpu_locks 00:05:40.418 ************************************ 00:05:40.418 08:59:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:40.678 * Looking for test storage... 
00:05:40.678 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.678 08:59:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:40.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.678 --rc genhtml_branch_coverage=1 00:05:40.678 --rc genhtml_function_coverage=1 00:05:40.678 --rc genhtml_legend=1 00:05:40.678 --rc geninfo_all_blocks=1 00:05:40.678 --rc geninfo_unexecuted_blocks=1 00:05:40.678 00:05:40.678 ' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:40.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.678 --rc genhtml_branch_coverage=1 00:05:40.678 --rc genhtml_function_coverage=1 00:05:40.678 --rc genhtml_legend=1 00:05:40.678 --rc geninfo_all_blocks=1 00:05:40.678 --rc geninfo_unexecuted_blocks=1 
00:05:40.678 00:05:40.678 ' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:40.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.678 --rc genhtml_branch_coverage=1 00:05:40.678 --rc genhtml_function_coverage=1 00:05:40.678 --rc genhtml_legend=1 00:05:40.678 --rc geninfo_all_blocks=1 00:05:40.678 --rc geninfo_unexecuted_blocks=1 00:05:40.678 00:05:40.678 ' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:40.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.678 --rc genhtml_branch_coverage=1 00:05:40.678 --rc genhtml_function_coverage=1 00:05:40.678 --rc genhtml_legend=1 00:05:40.678 --rc geninfo_all_blocks=1 00:05:40.678 --rc geninfo_unexecuted_blocks=1 00:05:40.678 00:05:40.678 ' 00:05:40.678 08:59:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:40.678 08:59:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:40.678 08:59:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:40.678 08:59:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.678 08:59:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.678 ************************************ 00:05:40.678 START TEST default_locks 00:05:40.678 ************************************ 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60979 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60979 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- 
common/autotest_common.sh@835 -- # '[' -z 60979 ']' 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:40.678 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.679 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.679 08:59:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:40.679 [2024-11-20 08:59:19.556065] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
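Earlier in this log, the `lt 1.15 2` check ran scripts/common.sh's cmp_versions, which splits each version string on separators (`IFS=.-:`) and compares the fields numerically. A simplified sketch of that comparison handling only dot-separated numeric fields (the real helper also splits on '-' and ':' and supports other operators):

```shell
# Simplified version comparison in the spirit of scripts/common.sh's
# cmp_versions: return 0 (true) if version $1 is strictly less than $2.
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i a b
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        a=${v1[i]:-0} b=${v2[i]:-0}        # missing fields compare as 0
        if ((a < b)); then return 0; fi
        if ((a > b)); then return 1; fi
    done
    return 1  # versions are equal
}
```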
00:05:40.679 [2024-11-20 08:59:19.556186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:05:40.937 [2024-11-20 08:59:19.704386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.937 [2024-11-20 08:59:19.771129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.873 08:59:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.873 08:59:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:41.873 08:59:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60979 00:05:41.873 08:59:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60979 00:05:41.873 08:59:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60979 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60979 ']' 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60979 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.130 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60979 00:05:42.388 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.388 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.388 killing process with pid 60979 00:05:42.388 08:59:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60979' 00:05:42.388 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60979 00:05:42.388 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60979 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60979 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60979 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60979 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60979 ']' 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
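The `NOT waitforlisten 60979` step above is a negative assertion: the target process was just killed, so waitforlisten must fail, and NOT inverts that failure into a passing status — which is why the "No such process" / "is no longer running" errors in the following lines are expected output, not a test failure. A sketch of such an inversion helper (autotest_common.sh's real NOT also validates the argument with valid_exec_arg first, as the xtrace shows):

```shell
# Invert a command's exit status so an expected failure counts as a pass.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, as expected
}
```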
00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.647 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60979) - No such process 00:05:42.647 ERROR: process (pid: 60979) is no longer running 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:42.647 00:05:42.647 real 0m1.961s 00:05:42.647 user 0m2.185s 00:05:42.647 sys 0m0.590s 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.647 08:59:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.647 ************************************ 00:05:42.647 END TEST default_locks 00:05:42.647 ************************************ 00:05:42.647 08:59:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:42.647 08:59:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:42.647 08:59:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.647 08:59:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.647 ************************************ 00:05:42.647 START TEST default_locks_via_rpc 00:05:42.647 ************************************ 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=61037 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 61037 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61037 ']' 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.647 08:59:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.906 [2024-11-20 08:59:21.586895] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:42.906 [2024-11-20 08:59:21.587514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61037 ] 00:05:42.906 [2024-11-20 08:59:21.740396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.906 [2024-11-20 08:59:21.814222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.473 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.473 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.474 08:59:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 61037 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 61037 00:05:43.474 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 61037 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 61037 ']' 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 61037 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.732 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61037 00:05:43.991 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.991 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.991 killing process with pid 61037 00:05:43.991 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61037' 00:05:43.991 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 61037 00:05:43.991 08:59:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 61037 00:05:44.251 00:05:44.251 real 0m1.564s 00:05:44.251 user 0m1.525s 00:05:44.251 sys 0m0.615s 00:05:44.251 08:59:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.251 
************************************ 00:05:44.251 END TEST default_locks_via_rpc 00:05:44.251 ************************************ 00:05:44.251 08:59:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.251 08:59:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:44.251 08:59:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.251 08:59:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.251 08:59:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.251 ************************************ 00:05:44.251 START TEST non_locking_app_on_locked_coremask 00:05:44.251 ************************************ 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61098 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61098 /var/tmp/spdk.sock 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61098 ']' 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.251 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.514 [2024-11-20 08:59:23.170339] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:44.514 [2024-11-20 08:59:23.170451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61098 ] 00:05:44.514 [2024-11-20 08:59:23.319370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.514 [2024-11-20 08:59:23.392192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61113 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61113 /var/tmp/spdk2.sock 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:45.079 08:59:23 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61113 ']' 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.079 08:59:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:45.079 [2024-11-20 08:59:23.774991] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:45.079 [2024-11-20 08:59:23.775105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61113 ] 00:05:45.079 [2024-11-20 08:59:23.940384] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:45.079 [2024-11-20 08:59:23.940455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.337 [2024-11-20 08:59:24.078212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.272 08:59:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.272 08:59:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:46.272 08:59:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61098 00:05:46.272 08:59:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.272 08:59:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61098 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61098 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61098 ']' 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61098 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61098 00:05:46.838 killing process with pid 61098 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 61098' 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61098 00:05:46.838 08:59:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61098 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61113 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61113 ']' 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61113 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61113 00:05:47.772 killing process with pid 61113 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61113' 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61113 00:05:47.772 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61113 00:05:48.030 00:05:48.030 real 0m3.832s 00:05:48.030 user 0m4.247s 00:05:48.030 sys 0m1.111s 00:05:48.030 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:48.030 08:59:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.030 ************************************ 00:05:48.030 END TEST non_locking_app_on_locked_coremask 00:05:48.030 ************************************ 00:05:48.289 08:59:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:48.289 08:59:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.289 08:59:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.289 08:59:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.289 ************************************ 00:05:48.289 START TEST locking_app_on_unlocked_coremask 00:05:48.289 ************************************ 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61193 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61193 /var/tmp/spdk.sock 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61193 ']' 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:48.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.289 08:59:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.289 [2024-11-20 08:59:27.072177] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:48.289 [2024-11-20 08:59:27.072316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61193 ] 00:05:48.548 [2024-11-20 08:59:27.222284] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.548 [2024-11-20 08:59:27.222536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.548 [2024-11-20 08:59:27.286469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61221 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61221 /var/tmp/spdk2.sock 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61221 ']' 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.502 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.503 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.503 08:59:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.503 [2024-11-20 08:59:28.191406] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:49.503 [2024-11-20 08:59:28.191523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61221 ] 00:05:49.503 [2024-11-20 08:59:28.355506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.761 [2024-11-20 08:59:28.494458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.329 08:59:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.329 08:59:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:50.329 08:59:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61221 00:05:50.329 08:59:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61221 00:05:50.329 08:59:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61193 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61193 ']' 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61193 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61193 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.267 killing process with pid 61193 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61193' 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61193 00:05:51.267 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61193 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61221 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61221 ']' 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61221 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61221 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.203 killing process with pid 61221 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61221' 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61221 00:05:52.203 08:59:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 61221 00:05:52.463 00:05:52.463 real 0m4.375s 00:05:52.463 user 0m4.955s 00:05:52.463 sys 0m1.243s 00:05:52.463 08:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.463 08:59:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.463 ************************************ 00:05:52.463 END TEST locking_app_on_unlocked_coremask 00:05:52.463 ************************************ 00:05:52.722 08:59:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.722 08:59:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.722 08:59:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.722 08:59:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.722 ************************************ 00:05:52.722 START TEST locking_app_on_locked_coremask 00:05:52.722 ************************************ 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61305 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61305 /var/tmp/spdk.sock 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61305 ']' 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.722 08:59:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.722 [2024-11-20 08:59:31.499813] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:52.722 [2024-11-20 08:59:31.499934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61305 ] 00:05:52.980 [2024-11-20 08:59:31.647983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.980 [2024-11-20 08:59:31.712968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61334 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61334 /var/tmp/spdk2.sock 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61334 /var/tmp/spdk2.sock 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61334 /var/tmp/spdk2.sock 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61334 ']' 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.917 08:59:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.917 [2024-11-20 08:59:32.672662] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:53.917 [2024-11-20 08:59:32.672774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61334 ] 00:05:54.176 [2024-11-20 08:59:32.836126] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61305 has claimed it. 00:05:54.176 [2024-11-20 08:59:32.836206] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.742 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61334) - No such process 00:05:54.742 ERROR: process (pid: 61334) is no longer running 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61305 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61305 00:05:54.742 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61305 00:05:55.001 08:59:33 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61305 ']' 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61305 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61305 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.001 killing process with pid 61305 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61305' 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61305 00:05:55.001 08:59:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61305 00:05:55.569 00:05:55.569 real 0m2.882s 00:05:55.569 user 0m3.453s 00:05:55.569 sys 0m0.740s 00:05:55.569 08:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.569 08:59:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.569 ************************************ 00:05:55.569 END TEST locking_app_on_locked_coremask 00:05:55.569 ************************************ 00:05:55.569 08:59:34 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:55.569 08:59:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:05:55.569 08:59:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.569 08:59:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.569 ************************************ 00:05:55.569 START TEST locking_overlapped_coremask 00:05:55.569 ************************************ 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61386 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61386 /var/tmp/spdk.sock 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61386 ']' 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.569 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.570 [2024-11-20 08:59:34.433041] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:55.570 [2024-11-20 08:59:34.433164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61386 ] 00:05:55.828 [2024-11-20 08:59:34.582688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.828 [2024-11-20 08:59:34.650182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.828 [2024-11-20 08:59:34.650350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.828 [2024-11-20 08:59:34.650353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.086 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.086 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61408 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61408 /var/tmp/spdk2.sock 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61408 /var/tmp/spdk2.sock 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61408 /var/tmp/spdk2.sock 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61408 ']' 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.087 08:59:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.345 [2024-11-20 08:59:35.005167] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:56.345 [2024-11-20 08:59:35.005274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:05:56.345 [2024-11-20 08:59:35.170118] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61386 has claimed it. 00:05:56.345 [2024-11-20 08:59:35.174925] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:56.912 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61408) - No such process 00:05:56.912 ERROR: process (pid: 61408) is no longer running 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61386 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61386 ']' 00:05:56.912 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61386 00:05:56.913 08:59:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.913 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.913 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61386 00:05:57.171 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.171 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.171 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61386' 00:05:57.171 killing process with pid 61386 00:05:57.171 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61386 00:05:57.171 08:59:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61386 00:05:57.739 00:05:57.739 real 0m2.079s 00:05:57.739 user 0m5.709s 00:05:57.739 sys 0m0.453s 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 END TEST locking_overlapped_coremask 00:05:57.739 ************************************ 00:05:57.739 08:59:36 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:57.739 08:59:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.739 08:59:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.739 08:59:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 ************************************ 00:05:57.739 START TEST 
locking_overlapped_coremask_via_rpc 00:05:57.739 ************************************ 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61454 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61454 /var/tmp/spdk.sock 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61454 ']' 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.739 08:59:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.739 [2024-11-20 08:59:36.565190] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:57.739 [2024-11-20 08:59:36.565376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61454 ] 00:05:57.997 [2024-11-20 08:59:36.724778] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.997 [2024-11-20 08:59:36.724896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.997 [2024-11-20 08:59:36.822572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.997 [2024-11-20 08:59:36.822693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.997 [2024-11-20 08:59:36.822699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61488 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61488 /var/tmp/spdk2.sock 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61488 ']' 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.961 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.961 08:59:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.961 [2024-11-20 08:59:37.760301] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:58.961 [2024-11-20 08:59:37.760423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61488 ] 00:05:59.250 [2024-11-20 08:59:37.928192] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.250 [2024-11-20 08:59:37.928256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.250 [2024-11-20 08:59:38.104716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.250 [2024-11-20 08:59:38.107863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.250 [2024-11-20 08:59:38.107863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.185 08:59:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.185 [2024-11-20 08:59:38.884934] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61454 has claimed it. 00:06:00.185 2024/11/20 08:59:38 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:00.185 request: 00:06:00.185 { 00:06:00.185 "method": "framework_enable_cpumask_locks", 00:06:00.185 "params": {} 00:06:00.185 } 00:06:00.185 Got JSON-RPC error response 00:06:00.185 GoRPCClient: error on JSON-RPC call 00:06:00.185 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # 
waitforlisten 61454 /var/tmp/spdk.sock 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61454 ']' 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.186 08:59:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61488 /var/tmp/spdk2.sock 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61488 ']' 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.445 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.703 ************************************ 00:06:00.703 END TEST locking_overlapped_coremask_via_rpc 00:06:00.703 ************************************ 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:00.703 08:59:39 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:00.703 00:06:00.703 real 0m3.079s 00:06:00.703 user 0m1.741s 00:06:00.703 sys 0m0.262s 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.703 08:59:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.703 08:59:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:00.703 08:59:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61454 ]] 00:06:00.703 08:59:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61454 00:06:00.703 08:59:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61454 ']' 00:06:00.703 08:59:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61454 00:06:00.703 08:59:39 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:00.703 08:59:39 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.703 08:59:39 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61454 00:06:00.961 killing process with pid 61454 00:06:00.961 08:59:39 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.961 08:59:39 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.961 08:59:39 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61454' 00:06:00.961 08:59:39 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61454 00:06:00.961 08:59:39 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61454 00:06:01.528 08:59:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61488 
]] 00:06:01.528 08:59:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61488 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61488 ']' 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61488 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61488 00:06:01.528 killing process with pid 61488 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61488' 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61488 00:06:01.528 08:59:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61488 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61454 ]] 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61454 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61454 ']' 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61454 00:06:01.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61454) - No such process 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61454 is not found' 00:06:01.788 Process with pid 61454 is not found 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61488 ]] 00:06:01.788 08:59:40 event.cpu_locks -- 
event/cpu_locks.sh@16 -- # killprocess 61488 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61488 ']' 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61488 00:06:01.788 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61488) - No such process 00:06:01.788 Process with pid 61488 is not found 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61488 is not found' 00:06:01.788 08:59:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:01.788 00:06:01.788 real 0m21.423s 00:06:01.788 user 0m39.163s 00:06:01.788 sys 0m6.070s 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.788 08:59:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.788 ************************************ 00:06:01.788 END TEST cpu_locks 00:06:01.788 ************************************ 00:06:02.047 00:06:02.047 real 0m49.927s 00:06:02.047 user 1m37.941s 00:06:02.047 sys 0m10.122s 00:06:02.047 08:59:40 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.047 08:59:40 event -- common/autotest_common.sh@10 -- # set +x 00:06:02.047 ************************************ 00:06:02.047 END TEST event 00:06:02.047 ************************************ 00:06:02.047 08:59:40 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.047 08:59:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.047 08:59:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.047 08:59:40 -- common/autotest_common.sh@10 -- # set +x 00:06:02.047 ************************************ 00:06:02.047 START TEST thread 00:06:02.047 ************************************ 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:02.047 * Looking for test storage... 
00:06:02.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.047 08:59:40 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.047 08:59:40 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.047 08:59:40 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.047 08:59:40 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.047 08:59:40 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.047 08:59:40 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.047 08:59:40 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.047 08:59:40 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.047 08:59:40 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.047 08:59:40 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.047 08:59:40 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.047 08:59:40 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:02.047 08:59:40 thread -- scripts/common.sh@345 -- # : 1 00:06:02.047 08:59:40 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.047 08:59:40 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.047 08:59:40 thread -- scripts/common.sh@365 -- # decimal 1 00:06:02.047 08:59:40 thread -- scripts/common.sh@353 -- # local d=1 00:06:02.047 08:59:40 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.047 08:59:40 thread -- scripts/common.sh@355 -- # echo 1 00:06:02.047 08:59:40 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.047 08:59:40 thread -- scripts/common.sh@366 -- # decimal 2 00:06:02.047 08:59:40 thread -- scripts/common.sh@353 -- # local d=2 00:06:02.047 08:59:40 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.047 08:59:40 thread -- scripts/common.sh@355 -- # echo 2 00:06:02.047 08:59:40 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.047 08:59:40 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.047 08:59:40 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.047 08:59:40 thread -- scripts/common.sh@368 -- # return 0 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.047 08:59:40 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.047 --rc genhtml_branch_coverage=1 00:06:02.047 --rc genhtml_function_coverage=1 00:06:02.048 --rc genhtml_legend=1 00:06:02.048 --rc geninfo_all_blocks=1 00:06:02.048 --rc geninfo_unexecuted_blocks=1 00:06:02.048 00:06:02.048 ' 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.048 --rc genhtml_branch_coverage=1 00:06:02.048 --rc genhtml_function_coverage=1 00:06:02.048 --rc genhtml_legend=1 00:06:02.048 --rc geninfo_all_blocks=1 00:06:02.048 --rc geninfo_unexecuted_blocks=1 00:06:02.048 00:06:02.048 ' 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.048 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.048 --rc genhtml_branch_coverage=1 00:06:02.048 --rc genhtml_function_coverage=1 00:06:02.048 --rc genhtml_legend=1 00:06:02.048 --rc geninfo_all_blocks=1 00:06:02.048 --rc geninfo_unexecuted_blocks=1 00:06:02.048 00:06:02.048 ' 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.048 --rc genhtml_branch_coverage=1 00:06:02.048 --rc genhtml_function_coverage=1 00:06:02.048 --rc genhtml_legend=1 00:06:02.048 --rc geninfo_all_blocks=1 00:06:02.048 --rc geninfo_unexecuted_blocks=1 00:06:02.048 00:06:02.048 ' 00:06:02.048 08:59:40 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.048 08:59:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.305 ************************************ 00:06:02.305 START TEST thread_poller_perf 00:06:02.305 ************************************ 00:06:02.306 08:59:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:02.306 [2024-11-20 08:59:40.991705] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:06:02.306 [2024-11-20 08:59:40.991842] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61655 ] 00:06:02.306 [2024-11-20 08:59:41.142924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.565 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:02.565 [2024-11-20 08:59:41.230455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.501 [2024-11-20T08:59:42.420Z] ====================================== 00:06:03.501 [2024-11-20T08:59:42.420Z] busy:2210371144 (cyc) 00:06:03.501 [2024-11-20T08:59:42.420Z] total_run_count: 308000 00:06:03.501 [2024-11-20T08:59:42.420Z] tsc_hz: 2200000000 (cyc) 00:06:03.501 [2024-11-20T08:59:42.420Z] ====================================== 00:06:03.501 [2024-11-20T08:59:42.420Z] poller_cost: 7176 (cyc), 3261 (nsec) 00:06:03.501 00:06:03.501 real 0m1.331s 00:06:03.501 user 0m1.166s 00:06:03.501 sys 0m0.057s 00:06:03.501 08:59:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.501 08:59:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.501 ************************************ 00:06:03.501 END TEST thread_poller_perf 00:06:03.501 ************************************ 00:06:03.501 08:59:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.501 08:59:42 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.501 08:59:42 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.501 08:59:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.501 ************************************ 00:06:03.501 START TEST thread_poller_perf 00:06:03.501 
************************************ 00:06:03.501 08:59:42 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:03.501 [2024-11-20 08:59:42.371315] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:03.501 [2024-11-20 08:59:42.371415] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61685 ] 00:06:03.760 [2024-11-20 08:59:42.520426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.760 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:03.760 [2024-11-20 08:59:42.615242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.135 [2024-11-20T08:59:44.054Z] ====================================== 00:06:05.135 [2024-11-20T08:59:44.054Z] busy:2202302312 (cyc) 00:06:05.135 [2024-11-20T08:59:44.054Z] total_run_count: 3804000 00:06:05.135 [2024-11-20T08:59:44.054Z] tsc_hz: 2200000000 (cyc) 00:06:05.135 [2024-11-20T08:59:44.054Z] ====================================== 00:06:05.135 [2024-11-20T08:59:44.054Z] poller_cost: 578 (cyc), 262 (nsec) 00:06:05.135 00:06:05.135 real 0m1.317s 00:06:05.135 user 0m1.153s 00:06:05.135 sys 0m0.055s 00:06:05.135 08:59:43 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.135 08:59:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.135 ************************************ 00:06:05.135 END TEST thread_poller_perf 00:06:05.135 ************************************ 00:06:05.135 08:59:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:05.135 00:06:05.135 real 0m2.956s 00:06:05.135 user 0m2.470s 00:06:05.135 sys 0m0.270s 00:06:05.135 08:59:43 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.135 ************************************ 00:06:05.135 END TEST thread 00:06:05.135 ************************************ 00:06:05.135 08:59:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.135 08:59:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:05.135 08:59:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.135 08:59:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.135 08:59:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.135 08:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:05.135 ************************************ 00:06:05.135 START TEST app_cmdline 00:06:05.135 ************************************ 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:05.135 * Looking for test storage... 00:06:05.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.135 08:59:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:05.135 08:59:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.136 --rc genhtml_branch_coverage=1 00:06:05.136 --rc genhtml_function_coverage=1 00:06:05.136 --rc 
genhtml_legend=1 00:06:05.136 --rc geninfo_all_blocks=1 00:06:05.136 --rc geninfo_unexecuted_blocks=1 00:06:05.136 00:06:05.136 ' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.136 --rc genhtml_branch_coverage=1 00:06:05.136 --rc genhtml_function_coverage=1 00:06:05.136 --rc genhtml_legend=1 00:06:05.136 --rc geninfo_all_blocks=1 00:06:05.136 --rc geninfo_unexecuted_blocks=1 00:06:05.136 00:06:05.136 ' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.136 --rc genhtml_branch_coverage=1 00:06:05.136 --rc genhtml_function_coverage=1 00:06:05.136 --rc genhtml_legend=1 00:06:05.136 --rc geninfo_all_blocks=1 00:06:05.136 --rc geninfo_unexecuted_blocks=1 00:06:05.136 00:06:05.136 ' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.136 --rc genhtml_branch_coverage=1 00:06:05.136 --rc genhtml_function_coverage=1 00:06:05.136 --rc genhtml_legend=1 00:06:05.136 --rc geninfo_all_blocks=1 00:06:05.136 --rc geninfo_unexecuted_blocks=1 00:06:05.136 00:06:05.136 ' 00:06:05.136 08:59:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:05.136 08:59:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61767 00:06:05.136 08:59:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61767 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61767 ']' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.136 08:59:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed 
spdk_get_version,rpc_get_methods 00:06:05.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.136 08:59:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:05.136 [2024-11-20 08:59:44.048959] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:05.136 [2024-11-20 08:59:44.049058] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61767 ] 00:06:05.468 [2024-11-20 08:59:44.200949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.468 [2024-11-20 08:59:44.272170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.728 08:59:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.728 08:59:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:05.728 08:59:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:06.297 { 00:06:06.297 "fields": { 00:06:06.297 "commit": "4f0cbdcd1", 00:06:06.297 "major": 25, 00:06:06.297 "minor": 1, 00:06:06.297 "patch": 0, 00:06:06.297 "suffix": "-pre" 00:06:06.297 }, 00:06:06.297 "version": "SPDK v25.01-pre git sha1 4f0cbdcd1" 00:06:06.297 } 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@26 
-- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:06.297 08:59:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:06.297 08:59:44 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:06.297 08:59:44 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:06.555 2024/11/20 08:59:45 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:06.555 request: 00:06:06.555 { 00:06:06.555 "method": "env_dpdk_get_mem_stats", 00:06:06.555 "params": {} 00:06:06.555 } 00:06:06.555 Got JSON-RPC error response 00:06:06.555 GoRPCClient: error on JSON-RPC call 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.555 08:59:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61767 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61767 ']' 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61767 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61767 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.555 killing process with pid 61767 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61767' 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@973 -- # kill 61767 00:06:06.555 08:59:45 app_cmdline -- common/autotest_common.sh@978 
-- # wait 61767 00:06:07.125 00:06:07.125 real 0m2.004s 00:06:07.125 user 0m2.495s 00:06:07.125 sys 0m0.529s 00:06:07.125 08:59:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.125 08:59:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.125 ************************************ 00:06:07.125 END TEST app_cmdline 00:06:07.125 ************************************ 00:06:07.125 08:59:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.125 08:59:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.125 08:59:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.125 08:59:45 -- common/autotest_common.sh@10 -- # set +x 00:06:07.125 ************************************ 00:06:07.125 START TEST version 00:06:07.125 ************************************ 00:06:07.125 08:59:45 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:07.125 * Looking for test storage... 
00:06:07.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:07.125 08:59:45 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.125 08:59:45 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.125 08:59:45 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.125 08:59:45 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.125 08:59:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.125 08:59:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.125 08:59:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.125 08:59:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.125 08:59:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.125 08:59:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.125 08:59:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.125 08:59:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.125 08:59:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.125 08:59:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.125 08:59:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.125 08:59:45 version -- scripts/common.sh@344 -- # case "$op" in 00:06:07.125 08:59:45 version -- scripts/common.sh@345 -- # : 1 00:06:07.125 08:59:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.125 08:59:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.125 08:59:45 version -- scripts/common.sh@365 -- # decimal 1 00:06:07.125 08:59:45 version -- scripts/common.sh@353 -- # local d=1 00:06:07.125 08:59:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.125 08:59:45 version -- scripts/common.sh@355 -- # echo 1 00:06:07.125 08:59:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.125 08:59:46 version -- scripts/common.sh@366 -- # decimal 2 00:06:07.125 08:59:46 version -- scripts/common.sh@353 -- # local d=2 00:06:07.125 08:59:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.125 08:59:46 version -- scripts/common.sh@355 -- # echo 2 00:06:07.125 08:59:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.125 08:59:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.125 08:59:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.125 08:59:46 version -- scripts/common.sh@368 -- # return 0 00:06:07.125 08:59:46 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.125 08:59:46 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.125 --rc genhtml_branch_coverage=1 00:06:07.125 --rc genhtml_function_coverage=1 00:06:07.125 --rc genhtml_legend=1 00:06:07.125 --rc geninfo_all_blocks=1 00:06:07.125 --rc geninfo_unexecuted_blocks=1 00:06:07.125 00:06:07.125 ' 00:06:07.125 08:59:46 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.125 --rc genhtml_branch_coverage=1 00:06:07.125 --rc genhtml_function_coverage=1 00:06:07.125 --rc genhtml_legend=1 00:06:07.125 --rc geninfo_all_blocks=1 00:06:07.125 --rc geninfo_unexecuted_blocks=1 00:06:07.125 00:06:07.125 ' 00:06:07.125 08:59:46 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.125 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.125 --rc genhtml_branch_coverage=1 00:06:07.125 --rc genhtml_function_coverage=1 00:06:07.125 --rc genhtml_legend=1 00:06:07.125 --rc geninfo_all_blocks=1 00:06:07.126 --rc geninfo_unexecuted_blocks=1 00:06:07.126 00:06:07.126 ' 00:06:07.126 08:59:46 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.126 --rc genhtml_branch_coverage=1 00:06:07.126 --rc genhtml_function_coverage=1 00:06:07.126 --rc genhtml_legend=1 00:06:07.126 --rc geninfo_all_blocks=1 00:06:07.126 --rc geninfo_unexecuted_blocks=1 00:06:07.126 00:06:07.126 ' 00:06:07.126 08:59:46 version -- app/version.sh@17 -- # get_header_version major 00:06:07.126 08:59:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # cut -f2 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.126 08:59:46 version -- app/version.sh@17 -- # major=25 00:06:07.126 08:59:46 version -- app/version.sh@18 -- # get_header_version minor 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # cut -f2 00:06:07.126 08:59:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.126 08:59:46 version -- app/version.sh@18 -- # minor=1 00:06:07.126 08:59:46 version -- app/version.sh@19 -- # get_header_version patch 00:06:07.126 08:59:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # cut -f2 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.126 08:59:46 version -- app/version.sh@19 -- # patch=0 00:06:07.126 
08:59:46 version -- app/version.sh@20 -- # get_header_version suffix 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # cut -f2 00:06:07.126 08:59:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:07.126 08:59:46 version -- app/version.sh@14 -- # tr -d '"' 00:06:07.126 08:59:46 version -- app/version.sh@20 -- # suffix=-pre 00:06:07.126 08:59:46 version -- app/version.sh@22 -- # version=25.1 00:06:07.126 08:59:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:07.126 08:59:46 version -- app/version.sh@28 -- # version=25.1rc0 00:06:07.126 08:59:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:07.126 08:59:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:07.385 08:59:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:07.385 08:59:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:07.385 00:06:07.385 real 0m0.268s 00:06:07.385 user 0m0.171s 00:06:07.385 sys 0m0.133s 00:06:07.385 08:59:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.385 08:59:46 version -- common/autotest_common.sh@10 -- # set +x 00:06:07.385 ************************************ 00:06:07.385 END TEST version 00:06:07.385 ************************************ 00:06:07.385 08:59:46 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:07.385 08:59:46 -- spdk/autotest.sh@194 -- # uname -s 00:06:07.385 08:59:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:07.385 08:59:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.385 08:59:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:07.385 08:59:46 -- spdk/autotest.sh@207 
-- # '[' 0 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:07.385 08:59:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.385 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.385 08:59:46 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:07.385 08:59:46 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:07.385 08:59:46 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:07.385 08:59:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.385 08:59:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.385 08:59:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.385 ************************************ 00:06:07.385 START TEST nvmf_tcp 00:06:07.385 ************************************ 00:06:07.385 08:59:46 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:07.385 * Looking for test storage... 
00:06:07.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:07.385 08:59:46 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.385 08:59:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.385 08:59:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.644 08:59:46 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.644 08:59:46 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.645 08:59:46 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.645 --rc genhtml_branch_coverage=1 00:06:07.645 --rc genhtml_function_coverage=1 00:06:07.645 --rc genhtml_legend=1 00:06:07.645 --rc geninfo_all_blocks=1 00:06:07.645 --rc geninfo_unexecuted_blocks=1 00:06:07.645 00:06:07.645 ' 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.645 --rc genhtml_branch_coverage=1 00:06:07.645 --rc genhtml_function_coverage=1 00:06:07.645 --rc genhtml_legend=1 00:06:07.645 --rc geninfo_all_blocks=1 00:06:07.645 --rc geninfo_unexecuted_blocks=1 00:06:07.645 00:06:07.645 ' 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:07.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.645 --rc genhtml_branch_coverage=1 00:06:07.645 --rc genhtml_function_coverage=1 00:06:07.645 --rc genhtml_legend=1 00:06:07.645 --rc geninfo_all_blocks=1 00:06:07.645 --rc geninfo_unexecuted_blocks=1 00:06:07.645 00:06:07.645 ' 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.645 --rc genhtml_branch_coverage=1 00:06:07.645 --rc genhtml_function_coverage=1 00:06:07.645 --rc genhtml_legend=1 00:06:07.645 --rc geninfo_all_blocks=1 00:06:07.645 --rc geninfo_unexecuted_blocks=1 00:06:07.645 00:06:07.645 ' 00:06:07.645 08:59:46 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.645 08:59:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.645 ************************************ 00:06:07.645 START TEST nvmf_target_core 00:06:07.645 ************************************ 00:06:07.645 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:07.645 * Looking for test storage... 
00:06:07.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:07.645 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.645 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.645 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.905 --rc genhtml_branch_coverage=1 00:06:07.905 --rc genhtml_function_coverage=1 00:06:07.905 --rc genhtml_legend=1 00:06:07.905 --rc geninfo_all_blocks=1 00:06:07.905 --rc geninfo_unexecuted_blocks=1 00:06:07.905 00:06:07.905 ' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.905 --rc genhtml_branch_coverage=1 
00:06:07.905 --rc genhtml_function_coverage=1 00:06:07.905 --rc genhtml_legend=1 00:06:07.905 --rc geninfo_all_blocks=1 00:06:07.905 --rc geninfo_unexecuted_blocks=1 00:06:07.905 00:06:07.905 ' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.905 --rc genhtml_branch_coverage=1 00:06:07.905 --rc genhtml_function_coverage=1 00:06:07.905 --rc genhtml_legend=1 00:06:07.905 --rc geninfo_all_blocks=1 00:06:07.905 --rc geninfo_unexecuted_blocks=1 00:06:07.905 00:06:07.905 ' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.905 --rc genhtml_branch_coverage=1 00:06:07.905 --rc genhtml_function_coverage=1 00:06:07.905 --rc genhtml_legend=1 00:06:07.905 --rc geninfo_all_blocks=1 00:06:07.905 --rc geninfo_unexecuted_blocks=1 00:06:07.905 00:06:07.905 ' 00:06:07.905 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme 
gen-hostnqn 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:07.906 08:59:46 
nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:07.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:07.906 ************************************ 00:06:07.906 START TEST nvmf_abort 
00:06:07.906 ************************************ 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:07.906 * Looking for test storage... 00:06:07.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:07.906 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.907 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.907 --rc genhtml_branch_coverage=1 00:06:07.907 --rc genhtml_function_coverage=1 00:06:07.907 --rc genhtml_legend=1 00:06:07.907 --rc geninfo_all_blocks=1 00:06:07.907 --rc geninfo_unexecuted_blocks=1 00:06:07.907 00:06:07.907 ' 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.907 --rc genhtml_branch_coverage=1 00:06:07.907 --rc genhtml_function_coverage=1 00:06:07.907 --rc genhtml_legend=1 00:06:07.907 --rc geninfo_all_blocks=1 00:06:07.907 --rc geninfo_unexecuted_blocks=1 00:06:07.907 00:06:07.907 ' 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.907 --rc genhtml_branch_coverage=1 00:06:07.907 --rc genhtml_function_coverage=1 00:06:07.907 --rc genhtml_legend=1 00:06:07.907 --rc geninfo_all_blocks=1 00:06:07.907 --rc geninfo_unexecuted_blocks=1 00:06:07.907 00:06:07.907 ' 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.907 --rc genhtml_branch_coverage=1 00:06:07.907 --rc genhtml_function_coverage=1 00:06:07.907 --rc genhtml_legend=1 00:06:07.907 --rc geninfo_all_blocks=1 00:06:07.907 --rc geninfo_unexecuted_blocks=1 00:06:07.907 00:06:07.907 ' 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.907 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:06:08.168 08:59:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:08.168 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:08.168 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@280 -- # nvmf_veth_init 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@223 -- # create_target_ns 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.169 08:59:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # create_main_bridge 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@105 -- # delete_main_bridge 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=2 type=veth 
transport=tcp ip_pool=0x0a000001 max 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:06:08.169 
08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator0 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target0 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:06:08.169 
08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0 up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target0_br 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target0 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:06:08.169 08:59:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:06:08.169 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:06:08.170 08:59:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:06:08.170 10.0.0.1 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/target0/ifalias 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:06:08.170 10.0.0.2 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator0 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # 
set_up initiator0_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:06:08.170 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target0_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:06:08.431 08:59:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator1 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target1 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1 up 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@152 -- # set_up target1_br 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target1 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772163 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:06:08.431 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:06:08.432 10.0.0.3 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772164 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:06:08.432 10.0.0.4 
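The `set_ip` calls traced above receive each address as a 32-bit integer (167772161 for initiator0, 167772164 for target1) and `val_to_ip` turns it into dotted-quad form with `printf '%u.%u.%u.%u'`. A minimal sketch of that conversion, assuming only plain bash arithmetic (the shifts reconstruct the four octets the trace shows as literal printf arguments):

```shell
# Convert a 32-bit integer into dotted-quad notation, mirroring the
# val_to_ip helper traced in the log (e.g. 167772161 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
```

With this, `val_to_ip 167772162` prints `10.0.0.2`, matching the address assigned to target0 earlier in the trace; incrementing the integer pool by 2 per pair yields the 10.0.0.3/10.0.0.4 pair seen here.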
00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:08.432 
08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target1_br 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 
-- # dev_map["target$id"]=target1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 2 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:08.432 08:59:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:08.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:08.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:06:08.432 00:06:08.432 --- 10.0.0.1 ping statistics --- 00:06:08.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.432 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:08.432 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
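Each helper in the trace (`set_up`, `set_ip`, `ping_ip`, `get_ip_address`) takes an optional `in_ns` argument naming an array variable such as `NVMF_TARGET_NS_CMD`; when it is non-empty, a bash nameref (`local -n ns=...`) expands that array into an `ip netns exec nvmf_ns_spdk` prefix before the `eval`, and otherwise the command runs in the default namespace. A sketch of that dispatch pattern, using a harmless hypothetical `env` prefix in place of `ip netns exec` (which needs root and a real namespace):

```shell
# Run a command either directly or behind a prefix array named by $1,
# mimicking the in_ns/nameref dispatch seen in nvmf/setup.sh.
run_cmd() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns        # nameref to the caller's prefix array
    eval "${ns[*]} $*"
  else
    eval "$*"
  fi
}

# Hypothetical stand-in for NVMF_TARGET_NS_CMD; the real array would be
# (ip netns exec nvmf_ns_spdk).
DEMO_NS_CMD=(env LC_ALL=C)
```

So `run_cmd '' 'ip link set target0 up'` and `run_cmd DEMO_NS_CMD 'ip link set target0 up'` correspond to the host-side and namespace-side branches of the trace, respectively.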
00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:06:08.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:08.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:06:08.433 00:06:08.433 --- 10.0.0.2 ping statistics --- 00:06:08.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.433 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # 
[[ -n initiator1 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:06:08.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:06:08.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:06:08.433 00:06:08.433 --- 10.0.0.3 ping statistics --- 00:06:08.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.433 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 
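The readback side works because `set_ip` earlier wrote each address into the device's `ifalias` attribute with `tee`, and `get_ip_address` simply `cat`s `/sys/class/net/<dev>/ifalias` back out. A sketch of that round-trip against a temporary directory standing in for sysfs (a real device would need the actual `/sys` tree and, for namespaced devices, the `ip netns exec` prefix):

```shell
# Simulate set_ip's tee write and get_ip_address's cat read through
# ifalias, using a temp dir in place of /sys/class/net.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/target0"

echo 10.0.0.2 | tee "$sysfs/target0/ifalias" >/dev/null   # set_ip side
ip=$(cat "$sysfs/target0/ifalias")                        # get_ip_address side

rm -rf "$sysfs"
```

Storing the address in `ifalias` lets later helpers (and the `NVMF_FIRST_TARGET_IP`-style exports below) recover it without parsing `ip addr` output.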
00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:06:08.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:08.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:06:08.433 00:06:08.433 --- 10.0.0.4 ping statistics --- 00:06:08.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.433 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # return 0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:08.433 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' 
]] 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:08.434 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:08.693 08:59:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.693 08:59:47 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=62204 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 62204 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 62204 ']' 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.693 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.694 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.694 08:59:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:08.694 [2024-11-20 08:59:47.483441] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:06:08.694 [2024-11-20 08:59:47.483561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.952 [2024-11-20 08:59:47.640211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.952 [2024-11-20 08:59:47.722558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:08.952 [2024-11-20 08:59:47.722665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:08.952 [2024-11-20 08:59:47.722699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.952 [2024-11-20 08:59:47.722714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.952 [2024-11-20 08:59:47.722729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:08.952 [2024-11-20 08:59:47.724412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.952 [2024-11-20 08:59:47.724576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.952 [2024-11-20 08:59:47.724598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 [2024-11-20 08:59:48.658137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 Malloc0 00:06:09.889 08:59:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 Delay0 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 [2024-11-20 08:59:48.737592] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.889 08:59:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:10.148 [2024-11-20 08:59:48.938512] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:12.099 Initializing NVMe Controllers 00:06:12.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:12.099 controller IO queue size 128 less than required 00:06:12.099 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:12.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:12.099 Initialization complete. Launching workers. 
00:06:12.099 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27383 00:06:12.099 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27444, failed to submit 62 00:06:12.099 success 27387, unsuccessful 57, failed 0 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:12.099 08:59:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:12.357 rmmod nvme_tcp 00:06:12.357 rmmod nvme_fabrics 00:06:12.357 rmmod nvme_keyring 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:06:12.357 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 62204 ']' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 62204 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 62204 ']' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 62204 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62204 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:12.357 killing process with pid 62204 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62204' 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 62204 00:06:12.357 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 62204 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:12.616 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:06:12.616 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:06:12.876 ************************************ 00:06:12.876 END TEST nvmf_abort 00:06:12.876 ************************************ 00:06:12.876 00:06:12.876 real 0m4.915s 00:06:12.876 user 0m13.145s 00:06:12.876 sys 0m1.253s 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.876 ************************************ 00:06:12.876 START TEST nvmf_ns_hotplug_stress 00:06:12.876 ************************************ 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:12.876 * Looking for test storage... 
00:06:12.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:12.876 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.876 --rc genhtml_branch_coverage=1 00:06:12.876 --rc genhtml_function_coverage=1 00:06:12.876 --rc genhtml_legend=1 00:06:12.876 --rc geninfo_all_blocks=1 00:06:12.876 --rc geninfo_unexecuted_blocks=1 00:06:12.876 00:06:12.876 ' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.876 --rc genhtml_branch_coverage=1 00:06:12.876 --rc genhtml_function_coverage=1 00:06:12.876 --rc genhtml_legend=1 00:06:12.876 --rc geninfo_all_blocks=1 00:06:12.876 --rc geninfo_unexecuted_blocks=1 00:06:12.876 00:06:12.876 ' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.876 --rc genhtml_branch_coverage=1 00:06:12.876 --rc genhtml_function_coverage=1 00:06:12.876 --rc genhtml_legend=1 00:06:12.876 --rc geninfo_all_blocks=1 00:06:12.876 --rc geninfo_unexecuted_blocks=1 00:06:12.876 00:06:12.876 ' 00:06:12.876 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:12.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.876 --rc genhtml_branch_coverage=1 00:06:12.876 --rc genhtml_function_coverage=1 00:06:12.876 --rc genhtml_legend=1 00:06:12.877 --rc geninfo_all_blocks=1 00:06:12.877 --rc geninfo_unexecuted_blocks=1 00:06:12.877 00:06:12.877 ' 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@7 -- # uname -s 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.877 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:13.137 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:13.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:13.137 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # 
local -g is_hw=no 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 
-- # ip link set nvmf_br up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:13.138 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:13.138 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target0 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:13.138 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:06:13.138 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:06:13.138 10.0.0.1 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- 
# set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:06:13.139 10.0.0.2 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:13.139 08:59:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:06:13.139 08:59:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:06:13.139 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target1 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:06:13.139 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:06:13.139 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@11 -- # local val=167772163 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:06:13.400 10.0.0.3 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # 
ip=10.0.0.4 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:06:13.400 10.0.0.4 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.400 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@129 -- # set_up target1_br 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:13.400 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:13.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:06:13.401 00:06:13.401 --- 10.0.0.1 ping statistics --- 00:06:13.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.401 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # 
[[ -n target0 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:06:13.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
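The address lookups above work by reading each interface's `ifalias`, where the setup scripts stored its IP. A sketch of that pattern; `SYSFS_NET` is a stand-in for `/sys/class/net` (an assumption for the sketch) so it runs without the real interfaces:

```shell
# Fake sysfs tree standing in for /sys/class/net.
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/initiator0" "$SYSFS_NET/target0"
echo 10.0.0.1 > "$SYSFS_NET/initiator0/ifalias"
echo 10.0.0.2 > "$SYSFS_NET/target0/ifalias"

get_ip_address() {
	local dev=$1 in_ns=${2:-} ip
	# In setup.sh the cat runs via "ip netns exec $in_ns" when a netns is named.
	ip=$(eval "${in_ns:+ip netns exec $in_ns }cat $SYSFS_NET/$dev/ifalias")
	[[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0   # -> 10.0.0.1
get_ip_address target0      # -> 10.0.0.2
```

Storing the address in `ifalias` keeps it queryable from both sides of the namespace boundary without parsing `ip addr` output.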
00:06:13.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:06:13.401 00:06:13.401 --- 10.0.0.2 ping statistics --- 00:06:13.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.401 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:13.401 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:06:13.401 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
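`ping_ip` above takes, as its second argument, the *name* of the variable holding the netns command prefix (`NVMF_TARGET_NS_CMD`) and binds it with a bash nameref (`local -n`). A sketch with `ping` and `ip` stubbed out (both stubs are assumptions; a real run needs the live namespace):

```shell
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

# Stubs: "ip netns exec <ns> CMD..." just runs CMD, and ping only echoes.
ip() { shift 3; "$@"; }
ping() { echo "ping $*"; }

ping_ip() {
	local ip=$1 in_ns=${2:-} count=${3:-1}
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns   # nameref: resolve the prefix array by name
		eval "${ns[*]} ping -c $count $ip"
	else
		eval "ping -c $count $ip"
	fi
}

ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD   # initiator address, pinged from the netns
ping_ip 10.0.0.2                      # target address, pinged from the host
```

Pinging each pair in both directions, as the loop above does for pairs 0 and 1, verifies the veth/bridge plumbing before any NVMe-oF traffic is attempted.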
00:06:13.401 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.112 ms 00:06:13.401 00:06:13.401 --- 10.0.0.3 ping statistics --- 00:06:13.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.401 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:13.401 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:13.401 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:06:13.402 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:13.402 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:06:13.402 00:06:13.402 --- 10.0.0.4 ping statistics --- 00:06:13.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.402 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # return 0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:13.402 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:13.402 08:59:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.402 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- 
# set +x 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=62526 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 62526 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62526 ']' 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.661 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.661 [2024-11-20 08:59:52.385365] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
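The `nvmf_legacy_env` step traced above folds the per-pair `dev_map`/`ifalias` data back into the flat variables older tests expect, then fixes up the transport options for tcp. A sketch using the addresses from this run; the `ip_of` table is an assumption replacing the live `ifalias` lookups:

```shell
# Addresses the setup assigned via ifalias earlier in the log.
declare -A ip_of=(
	[initiator0]=10.0.0.1 [target0]=10.0.0.2
	[initiator1]=10.0.0.3 [target1]=10.0.0.4
)

NVMF_TARGET_INTERFACE=target0
NVMF_TARGET_INTERFACE2=target1
NVMF_FIRST_INITIATOR_IP=${ip_of[initiator0]}
NVMF_SECOND_INITIATOR_IP=${ip_of[initiator1]}
NVMF_FIRST_TARGET_IP=${ip_of[target0]}
NVMF_SECOND_TARGET_IP=${ip_of[target1]}

# For tcp the log appends "-o" to the options: "-t tcp" -> "-t tcp -o".
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ $TEST_TRANSPORT == tcp ]]; then
	NVMF_TRANSPORT_OPTS+=" -o"
fi
```

`NVMF_FIRST_TARGET_IP=10.0.0.2` is the address the listener and perf workload below are pointed at.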
00:06:13.661 [2024-11-20 08:59:52.385488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.661 [2024-11-20 08:59:52.543841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.920 [2024-11-20 08:59:52.617416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.920 [2024-11-20 08:59:52.617481] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.920 [2024-11-20 08:59:52.617507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.920 [2024-11-20 08:59:52.617519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.920 [2024-11-20 08:59:52.617528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:13.920 [2024-11-20 08:59:52.618785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.920 [2024-11-20 08:59:52.618925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.920 [2024-11-20 08:59:52.618933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:13.920 08:59:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:14.179 [2024-11-20 08:59:53.090828] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:14.438 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:14.697 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:06:14.956 [2024-11-20 08:59:53.699619] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:14.956 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.215 08:59:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:15.474 Malloc0 00:06:15.474 08:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.733 Delay0 00:06:15.733 08:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.992 08:59:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:16.251 NULL1 00:06:16.251 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:16.510 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62649 00:06:16.510 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:16.510 08:59:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:16.510 08:59:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.887 Read completed with error (sct=0, sc=11) 00:06:17.888 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.145 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:18.145 08:59:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:18.404 true 00:06:18.404 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:18.404 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.368 08:59:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.368 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.626 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:19.626 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:19.884 true 00:06:19.884 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:19.884 08:59:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.450 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.708 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.966 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:20.966 08:59:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:21.224 true 00:06:21.224 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:21.224 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.482 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.050 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:22.050 09:00:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:22.317 true 00:06:22.317 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:22.317 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.589 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.847 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1005 00:06:22.847 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:23.105 true 00:06:23.105 09:00:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:23.105 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.425 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.699 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:23.699 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:24.266 true 00:06:24.266 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:24.266 09:00:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.831 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.089 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:25.089 09:00:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:25.347 true 00:06:25.347 09:00:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:25.347 09:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.912 09:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.170 09:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:26.170 09:00:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:26.428 true 00:06:26.428 09:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:26.428 09:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.687 09:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.255 09:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:27.255 09:00:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:27.513 true 00:06:27.513 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:27.513 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.771 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.089 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:28.089 09:00:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:28.363 true 00:06:28.363 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:28.363 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.622 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.880 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:28.880 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:29.138 true 00:06:29.138 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:29.138 09:00:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.397 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:06:29.655 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:29.655 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:29.913 true 00:06:29.913 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:29.913 09:00:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.844 09:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.102 09:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:31.102 09:00:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:31.360 true 00:06:31.360 09:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:31.360 09:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.617 09:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.874 09:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:31.874 09:00:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:32.441 true 00:06:32.441 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:32.441 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.441 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.005 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:33.005 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:33.005 true 00:06:33.005 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:33.005 09:00:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.939 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.197 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:34.197 09:00:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:34.458 true 00:06:34.458 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:34.459 09:00:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.715 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.972 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:34.972 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:35.230 true 00:06:35.230 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:35.230 09:00:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.487 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.744 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:35.744 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:36.002 true 00:06:36.002 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:36.002 09:00:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.934 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.934 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.193 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:37.193 09:00:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:37.451 true 00:06:37.451 09:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:37.451 09:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.709 09:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.967 09:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:37.967 09:00:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:38.226 true 00:06:38.226 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:38.226 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.484 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.743 09:00:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:38.743 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:39.001 true 00:06:39.001 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:39.001 09:00:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.937 09:00:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.195 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:40.195 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:40.453 true 00:06:40.453 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:40.453 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.020 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.020 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:41.020 09:00:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:06:41.278 true 00:06:41.278 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:41.278 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.845 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.104 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:42.104 09:00:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:42.362 true 00:06:42.362 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:42.362 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.620 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.878 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:42.878 09:00:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:43.443 true 00:06:43.443 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:43.443 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.443 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.771 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:43.771 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:44.056 true 00:06:44.056 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:44.056 09:00:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.989 09:00:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.247 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:45.248 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:45.505 true 00:06:45.505 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:45.505 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.763 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.021 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:46.021 09:00:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:46.585 true 00:06:46.585 09:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:46.585 09:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.842 09:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.842 Initializing NVMe Controllers 00:06:46.842 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:46.842 Controller IO queue size 128, less than required. 00:06:46.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:46.842 Controller IO queue size 128, less than required. 00:06:46.842 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:46.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:46.842 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:46.842 Initialization complete. Launching workers. 
00:06:46.842 ======================================================== 00:06:46.842 Latency(us) 00:06:46.842 Device Information : IOPS MiB/s Average min max 00:06:46.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 851.67 0.42 52677.72 3268.64 1016473.56 00:06:46.842 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6926.74 3.38 18478.24 3618.79 586148.33 00:06:46.842 ======================================================== 00:06:46.842 Total : 7778.40 3.80 22222.79 3268.64 1016473.56 00:06:46.842 00:06:47.100 09:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:47.100 09:00:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:47.359 true 00:06:47.359 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62649 00:06:47.359 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62649) - No such process 00:06:47.359 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62649 00:06:47.359 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.635 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.944 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:47.944 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:47.944 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:47.944 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.944 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:48.202 null0 00:06:48.202 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.202 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.202 09:00:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:48.459 null1 00:06:48.459 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.459 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.459 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:48.716 null2 00:06:48.716 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.716 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.717 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:48.975 null3 00:06:48.975 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.975 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
00:06:48.975 09:00:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:49.232 null4 00:06:49.232 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.232 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.232 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:49.490 null5 00:06:49.490 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.490 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.490 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:49.748 null6 00:06:49.748 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:49.748 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:49.748 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:50.007 null7 00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:50.007 09:00:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.007 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63684 63685 63687 63689 63691 63692 63695 63696
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.008 09:00:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.574 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.832 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.097 09:00:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:51.355 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.614 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:51.872 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:52.131 09:00:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:52.131 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.131 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.131 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.389 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.390 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:52.647 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:52.905 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:52.905 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:52.906 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.163 09:00:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.163 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.163 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:53.421 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:53.679 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:53.937 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:54.194 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:54.194 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.194 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.195 09:00:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16
-- # (( i < 10 )) 00:06:54.195 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:54.453 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.712 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:54.970 
09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:54.970 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:55.228 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.228 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.228 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:55.228 09:00:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.228 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.228 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.228 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.486 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:55.743 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:55.743 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:55.743 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:55.743 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:55.743 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.001 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.002 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:56.002 09:00:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:56.259 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.260 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.260 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:56.517 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.518 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.518 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.776 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.033 09:00:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.292 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 09:00:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.550 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.808 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:57.808 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:57.808 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:57.808 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.066 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.066 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.066 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 
00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:58.067 rmmod nvme_tcp 00:06:58.067 rmmod nvme_fabrics 00:06:58.067 rmmod nvme_keyring 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 62526 ']' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 62526 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62526 ']' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62526 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62526 00:06:58.067 killing process 
with pid 62526 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62526' 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62526 00:06:58.067 09:00:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62526 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:58.325 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:06:58.326 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local 
dev=nvmf_br in_ns= 00:06:58.326 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:58.326 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:06:58.326 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:06:58.584 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:06:58.584 ************************************ 00:06:58.584 END TEST nvmf_ns_hotplug_stress 00:06:58.584 ************************************ 
00:06:58.584 00:06:58.584 real 0m45.727s 00:06:58.584 user 3m51.242s 00:06:58.584 sys 0m14.137s 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:58.584 ************************************ 00:06:58.584 START TEST nvmf_delete_subsystem 00:06:58.584 ************************************ 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:58.584 * Looking for test storage... 
00:06:58.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.584 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:58.843 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.843 --rc genhtml_branch_coverage=1 00:06:58.843 --rc genhtml_function_coverage=1 00:06:58.843 --rc genhtml_legend=1 00:06:58.843 --rc geninfo_all_blocks=1 00:06:58.843 --rc geninfo_unexecuted_blocks=1 00:06:58.843 00:06:58.843 ' 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.843 --rc genhtml_branch_coverage=1 00:06:58.843 --rc genhtml_function_coverage=1 00:06:58.843 --rc genhtml_legend=1 00:06:58.843 --rc geninfo_all_blocks=1 00:06:58.843 --rc geninfo_unexecuted_blocks=1 00:06:58.843 00:06:58.843 ' 00:06:58.843 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.843 --rc genhtml_branch_coverage=1 00:06:58.843 --rc genhtml_function_coverage=1 00:06:58.843 --rc genhtml_legend=1 00:06:58.843 --rc geninfo_all_blocks=1 00:06:58.843 --rc geninfo_unexecuted_blocks=1 00:06:58.843 00:06:58.844 ' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.844 --rc genhtml_branch_coverage=1 00:06:58.844 --rc genhtml_function_coverage=1 00:06:58.844 --rc genhtml_legend=1 00:06:58.844 --rc geninfo_all_blocks=1 00:06:58.844 --rc geninfo_unexecuted_blocks=1 00:06:58.844 00:06:58.844 ' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 
00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:58.844 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@50 -- # : 0 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:58.844 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@223 -- # create_target_ns 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # 
local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:58.844 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 
-- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy 
]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:06:58.845 
09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:06:58.845 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:06:58.845 10.0.0.1 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:06:58.845 10.0.0.2 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
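The `val_to_ip` calls above turn the integer IP pool (`ip_pool=0x0a000001`, i.e. 167772161) into dotted-quad addresses: 167772161 becomes 10.0.0.1 for initiator0 and 167772162 becomes 10.0.0.2 for target0. A standalone reconstruction of that conversion, assuming the same big-endian octet layout the traced `printf '%u.%u.%u.%u\n' 10 0 0 1` implies:

```shell
# Convert a 32-bit integer from the test IP pool into dotted-quad form,
# mirroring the val_to_ip helper traced from nvmf/setup.sh (reconstruction).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
```

Because the pool is just an integer, the caller can hand out consecutive addresses per interface pair with plain arithmetic (`ip_pool += 2` per pair, as seen in the `(( _dev++, ip_pool += 2 ))` step later in this trace).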
nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:06:58.845 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:59.106 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:59.106 09:00:37 
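At this point the trace has finished wiring the first initiator/target pair: two veth pairs, the target end moved into the `nvmf_ns_spdk` namespace, and both `_br` peers enslaved to the `nvmf_br` bridge. The shape of that per-pair sequence can be sketched as follows (dry-run: `run` echoes each command rather than executing it, since the real commands need root; drop the echo on a real host):

```shell
# Per-pair topology reconstructed from the setup_interface_pair trace:
#   initiatorN <-> initiatorN_br ==[nvmf_br]== targetN_br <-> targetN (in netns)
run() { echo "+ $*"; }   # dry-run wrapper; replace with direct execution on a host

setup_pair() {
  local id=$1 ns=nvmf_ns_spdk
  run ip link add "initiator$id" type veth peer name "initiator${id}_br"
  run ip link add "target$id"    type veth peer name "target${id}_br"
  run ip link set "target$id" netns "$ns"          # target side lives in the namespace
  run ip link set "initiator${id}_br" master nvmf_br
  run ip link set "target${id}_br"    master nvmf_br
}

setup_pair 0
```

The bridge is what lets the initiator (host side) and the target (namespace side) exchange TCP traffic while still being administratively separate, which is why the teardown earlier in this log only needs `ip link delete initiatorN` plus removing the namespace.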
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:59.106 09:00:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target1 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:06:59.106 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772163 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 
dev initiator1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:06:59.107 10.0.0.3 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772164 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:06:59.107 09:00:37 
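The `set_ip` calls above turn an integer IP-pool value into dotted-quad form: `167772163` becomes `10.0.0.3` and `167772164` becomes `10.0.0.4`. A small, assumed-equivalent re-implementation of that `val_to_ip` step (the trace only shows the final `printf`, so the byte-shifting here is an inference):

```shell
#!/usr/bin/env bash

# Convert a 32-bit integer to dotted-quad notation, one octet per byte.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772163   # -> 10.0.0.3
val_to_ip 167772164   # -> 10.0.0.4
```

This is why consecutive pool values land on consecutive host addresses: the pair loop advances `ip_pool` by 2, giving each initiator/target pair two adjacent addresses in 10.0.0.0/24.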
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:06:59.107 10.0.0.4 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- 
# [[ tcp == tcp ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:59.107 09:00:37 
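The remaining plumbing in this phase, per the trace: move the target end into the `nvmf_ns_spdk` namespace, enslave both `*_br` peers to the `nvmf_br` bridge, and open TCP port 4420 on the initiator device. A root-only sketch with helper names matching the trace (bodies are assumptions reconstructed from the logged commands):

```shell
#!/usr/bin/env bash

# Move a device into a network namespace (default matches the trace).
add_to_ns() { ip link set "$1" netns "${2:-nvmf_ns_spdk}"; }

# Enslave a device to the bridge and bring it up.
add_to_bridge() {
  local dev=$1 bridge=${2:-nvmf_br}
  ip link set "$dev" master "$bridge"
  ip link set "$dev" up
}

# Accept NVMe/TCP traffic on the initiator side. The trace tags the rule
# with an SPDK_NVMF comment so teardown can locate and delete it later.
open_nvmf_port() {
  local dev=$1
  iptables -I INPUT 1 -i "$dev" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $dev -p tcp --dport 4420 -j ACCEPT"
}
```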
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:59.107 09:00:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:59.107 PING 10.0.0.1 
(10.0.0.1) 56(84) bytes of data. 00:06:59.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:06:59.107 00:06:59.107 --- 10.0.0.1 ping statistics --- 00:06:59.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.107 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:06:59.107 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:59.108 09:00:38 
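The ping phase resolves each device's address by reading back the `ifalias` file that `set_ip` wrote earlier, then sends a single ping, running inside the namespace when the address being checked belongs to an initiator (so the probe crosses the bridge). A simplified, assumed-equivalent sketch of that lookup-and-probe pair:

```shell
#!/usr/bin/env bash

# Read the address recorded in the device's ifalias, optionally in a netns.
get_ip_address() {
  local dev=$1 ns=${2:-}
  ${ns:+ip netns exec "$ns"} cat "/sys/class/net/$dev/ifalias"
}

# Send one ping, optionally from inside a netns; nonzero status on failure.
ping_ip() {
  local ip=$1 ns=${2:-}
  ${ns:+ip netns exec "$ns"} ping -c 1 "$ip"
}

# Usage mirroring the log (root-only, so left commented here):
# ping_ip "$(get_ip_address initiator0)" NVMF_TARGET_NS_CMD-resolved-ns
# ping_ip "$(get_ip_address target0 nvmf_ns_spdk)"
```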
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:06:59.108 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:06:59.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:59.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:06:59.374 00:06:59.374 --- 10.0.0.2 ping statistics --- 00:06:59.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.374 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:06:59.374 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:59.374 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:06:59.374 00:06:59.374 --- 10.0.0.3 ping statistics --- 00:06:59.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.374 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:59.374 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:06:59.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:59.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.121 ms 00:06:59.375 00:06:59.375 --- 10.0.0.4 ping statistics --- 00:06:59.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:59.375 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # return 0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:06:59.375 09:00:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n 
initiator1 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:59.375 09:00:38 
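`nvmf_legacy_env` distills the `dev_map` into the flat variables older test scripts expect. The values below are exactly the ones this run resolved in the trace above:

```shell
#!/usr/bin/env bash

# Legacy environment as resolved in this run.
NVMF_TARGET_INTERFACE=target0
NVMF_TARGET_INTERFACE2=target1
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_SECOND_INITIATOR_IP=10.0.0.3
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_SECOND_TARGET_IP=10.0.0.4
```

Note the pairing: each interface pair consumes two consecutive addresses, so initiator0/target0 hold .1/.2 and initiator1/target1 hold .3/.4.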
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=65111 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 65111 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 65111 ']' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.375 09:00:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.376 [2024-11-20 09:00:38.231282] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:59.376 [2024-11-20 09:00:38.231400] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.633 [2024-11-20 09:00:38.383168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.633 [2024-11-20 09:00:38.458033] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:59.633 [2024-11-20 09:00:38.458086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:59.633 [2024-11-20 09:00:38.458098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:59.633 [2024-11-20 09:00:38.458107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:59.633 [2024-11-20 09:00:38.458114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:59.633 [2024-11-20 09:00:38.459242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.633 [2024-11-20 09:00:38.459255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.565 [2024-11-20 09:00:39.396102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.565 [2024-11-20 09:00:39.412262] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.565 NULL1 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.565 Delay0 00:07:00.565 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.566 09:00:39 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=65168 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:00.566 09:00:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:00.822 [2024-11-20 09:00:39.687512] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:02.721 09:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:02.721 09:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.721 09:00:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, 
sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 [2024-11-20 09:00:41.817377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ab000d350 is same with the state(6) to be set 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write 
completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 
00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Write completed with error (sct=0, sc=8) 00:07:02.979 starting I/O failed: -6 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.979 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 
starting I/O failed: -6 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 starting I/O failed: -6 00:07:02.980 [2024-11-20 09:00:41.818588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13467e0 is same with the state(6) to be set 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error 
(sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Write completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:02.980 Read completed with error (sct=0, sc=8) 00:07:03.913 [2024-11-20 09:00:42.701654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1341ee0 is same with the state(6) to be set 
00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 [2024-11-20 09:00:42.813666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1345a50 is same with the state(6) to be set 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read 
completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 [2024-11-20 09:00:42.813953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1348ea0 is same with the state(6) to be set 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with 
error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Write completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.913 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 [2024-11-20 09:00:42.816164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ab000d680 is same with the state(6) to be set 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error 
(sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Read completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 Write completed with error (sct=0, sc=8) 00:07:03.914 [2024-11-20 09:00:42.819825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9ab000d020 is same with the state(6) to be set 00:07:03.914 Initializing NVMe Controllers 00:07:03.914 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:03.914 Controller IO queue size 128, less than required. 00:07:03.914 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:03.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:03.914 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:03.914 Initialization complete. Launching workers. 00:07:03.914 ======================================================== 00:07:03.914 Latency(us) 00:07:03.914 Device Information : IOPS MiB/s Average min max 00:07:03.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.43 0.08 995592.80 394.10 1996967.04 00:07:03.914 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.11 0.08 1023429.44 865.23 2003774.83 00:07:03.914 ======================================================== 00:07:03.914 Total : 313.54 0.15 1009363.73 394.10 2003774.83 00:07:03.914 00:07:03.914 [2024-11-20 09:00:42.820864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1341ee0 (9): Bad file descriptor 00:07:03.914 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:03.914 09:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.914 09:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:03.914 09:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65168 00:07:03.914 09:00:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 65168 00:07:04.480 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (65168) - No such process 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@45 -- # NOT wait 65168 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 65168 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 65168 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.480 09:00:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.480 [2024-11-20 09:00:43.347154] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=65219 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.480 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:04.737 [2024-11-20 09:00:43.615156] 
subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:04.994 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.994 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:04.994 09:00:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.559 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.559 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:05.559 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.123 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.123 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:06.123 09:00:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.689 09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.689 09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:06.689 09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.254 09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.254 09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219 00:07:07.254 
09:00:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:07.512 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:07.512 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219
00:07:07.512 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:08.077 Initializing NVMe Controllers
00:07:08.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:08.077 Controller IO queue size 128, less than required.
00:07:08.077 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:08.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:08.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:08.077 Initialization complete. Launching workers.
00:07:08.077 ========================================================
00:07:08.077 Latency(us)
00:07:08.077 Device Information : IOPS MiB/s Average min max
00:07:08.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003705.64 1000158.95 1012211.96
00:07:08.077 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006064.53 1000208.01 1041961.71
00:07:08.077 ========================================================
00:07:08.077 Total : 256.00 0.12 1004885.09 1000158.95 1041961.71
00:07:08.077
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 65219
/home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (65219) - No such process
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 65219
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:07:08.077 rmmod
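The alternating `kill -0 65219` / `sleep 0.5` entries traced above come from the bounded wait loop in delete_subsystem.sh: the script polls the perf pid every half second until `kill -0` reports it gone (the "No such process" line), bailing out if the counter passes the limit. A self-contained sketch of that polling idiom; the background `sleep` stands in for spdk_nvme_perf, the bound of 20 matches the `(( delay++ > 20 ))` in the trace, and everything else is illustrative:

```shell
#!/usr/bin/env bash
sleep 1 &                        # stand-in for the spdk_nvme_perf worker
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then  # same bound as delete_subsystem.sh@60
        echo "perf did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done

wait "$perf_pid"                 # collect the worker's exit status
echo "perf pid $perf_pid exited"
```

The loop tolerates the race visible in the log: once the process is gone, `kill -0` fails with "No such process" and the script moves on to `wait` for the exit status.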
nvme_tcp 00:07:08.077 rmmod nvme_fabrics 00:07:08.077 rmmod nvme_keyring 00:07:08.077 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 65111 ']' 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 65111 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 65111 ']' 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 65111 00:07:08.335 09:00:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65111 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.335 killing process with pid 65111 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65111' 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 65111 00:07:08.335 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 65111 00:07:08.593 09:00:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:07:08.593 09:00:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=()
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:07:08.593 ************************************
00:07:08.593 END TEST nvmf_delete_subsystem
00:07:08.593 ************************************
00:07:08.593
00:07:08.593 real 0m10.108s
00:07:08.593 user 0m30.197s
00:07:08.593 sys 0m1.729s
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:08.593 09:00:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
09:00:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:08.853 ************************************ 00:07:08.853 START TEST nvmf_host_management 00:07:08.853 ************************************ 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:08.853 * Looking for test storage... 00:07:08.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.853 09:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.853 --rc genhtml_branch_coverage=1 00:07:08.853 --rc genhtml_function_coverage=1 00:07:08.853 --rc genhtml_legend=1 00:07:08.853 --rc geninfo_all_blocks=1 00:07:08.853 --rc geninfo_unexecuted_blocks=1 00:07:08.853 00:07:08.853 ' 00:07:08.853 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:08.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.853 --rc genhtml_branch_coverage=1 00:07:08.853 --rc genhtml_function_coverage=1 00:07:08.853 --rc genhtml_legend=1 00:07:08.854 --rc geninfo_all_blocks=1 00:07:08.854 --rc geninfo_unexecuted_blocks=1 00:07:08.854 00:07:08.854 ' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:08.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.854 --rc genhtml_branch_coverage=1 00:07:08.854 --rc genhtml_function_coverage=1 00:07:08.854 --rc genhtml_legend=1 00:07:08.854 --rc geninfo_all_blocks=1 00:07:08.854 --rc geninfo_unexecuted_blocks=1 00:07:08.854 00:07:08.854 ' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:08.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.854 --rc genhtml_branch_coverage=1 00:07:08.854 --rc 
genhtml_function_coverage=1 00:07:08.854 --rc genhtml_legend=1 00:07:08.854 --rc geninfo_all_blocks=1 00:07:08.854 --rc geninfo_unexecuted_blocks=1 00:07:08.854 00:07:08.854 ' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:08.854 
09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:08.854 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: 
: integer expression expected 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:07:08.854 09:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 
00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:07:08.854 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@28 -- # local -g _dev 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@151 -- # set_up initiator0 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.115 
09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:07:09.115 10.0.0.1 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:07:09.115 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 
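The `val_to_ip` calls traced above show the address pool being kept as a single 32-bit integer (`0x0a000001` == 167772161 == 10.0.0.1) and rendered as a dotted quad with `printf`. The trace only shows the final `printf`; the shift-and-mask arithmetic below is an assumed reconstruction of how the octets are derived:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip: split a 32-bit integer into four octets.
# The shift/mask arithmetic is an assumption; the xtrace only shows
# the resulting `printf '%u.%u.%u.%u\n' 10 0 0 1` call.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
```

Storing the pool as an integer makes "next address" a plain `++ip`, which is exactly what the `ips=("$ip" $((++ip)))` line in the trace relies on.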
00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:07:09.116 10.0.0.2 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 
09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 
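At this point the trace has completed one full `setup_interface_pair` pass: a veth pair per endpoint (the `_br`-suffixed peers get plugged into the `nvmf_br` bridge), the target end moved into the namespace, addresses assigned, and an iptables rule opening TCP port 4420 on the initiator side. A condensed dry-run sketch of that pass (helper names and the `run` wrapper are hypothetical; the real commands need root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one initiator/target pair, echoing instead of
# executing (veth creation, netns moves and iptables all need root).
run() { echo "+ $*"; }

setup_interface_pair() {
    local id=$1 ip_initiator=$2 ip_target=$3 ns=nvmf_ns_spdk
    # Two veth pairs; the *_br peers stay in the root namespace.
    run ip link add "initiator$id" type veth peer name "initiator${id}_br"
    run ip link add "target$id"    type veth peer name "target${id}_br"
    run ip link set "target$id" netns "$ns"             # target side lives in the netns
    run ip addr add "$ip_initiator/24" dev "initiator$id"
    run ip netns exec "$ns" ip addr add "$ip_target/24" dev "target$id"
    run ip link set "initiator${id}_br" master nvmf_br  # bridge the two _br ends together
    run ip link set "target${id}_br"    master nvmf_br
    run iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT
}

setup_interface_pair 0 10.0.0.1 10.0.0.2
```

The bridge is what makes initiator0 and target0 reachable from each other even though the target end sits in a separate namespace; port 4420 is the standard NVMe-oF TCP port.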
00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:07:09.116 09:00:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:07:09.116 09:00:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:07:09.116 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:07:09.116 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:07:09.116 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.116 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:07:09.116 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:07:09.376 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:07:09.376 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:07:09.376 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.376 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local 
dev=target1 ns=nvmf_ns_spdk 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:07:09.377 10.0.0.3 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:07:09.377 10.0.0.4 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:09.377 09:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link 
set initiator1_br up 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
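The `(( _dev++, ip_pool += 2 ))` step that closes each pass shows the allocation loop: each pair consumes two consecutive addresses (initiator, then target), so the pool advances by 2 per iteration, matching the guard `(_dev + no) * 2 <= 255` seen earlier. A small self-contained sketch of that arithmetic:

```shell
#!/usr/bin/env bash
# Sketch of the pool-allocation loop from setup_interfaces: two
# addresses per pair, pool advances by 2 each iteration.
no=2                       # number of initiator/target pairs
ip_pool=$((0x0a000001))    # 167772161 == 10.0.0.1
_dev=0
while (( _dev < no )); do
    # initiator gets ip_pool, target gets ip_pool + 1 (cf. ips=("$ip" $((++ip))))
    echo "pair $_dev: initiator=$ip_pool target=$((ip_pool + 1))"
    (( _dev++, ip_pool += 2 ))
done
```

This is why pair 0 lands on 10.0.0.1/10.0.0.2 and pair 1 on 10.0.0.3/10.0.0.4 in the trace.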
00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:09.377 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 
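The `cat /sys/class/net/initiator0/ifalias` call here is the other half of a round-trip: `set_ip` earlier wrote each device's address into its kernel `ifalias` attribute with `tee`, and `get_ip_address` now reads it back rather than parsing `ip addr` output. A sketch of that mechanism using a temp directory as a stand-in (writing to the real `/sys/class/net/<dev>/ifalias` requires root and an actual device):

```shell
#!/usr/bin/env bash
# Sketch of the ifalias round-trip; a temp dir stands in for
# /sys/class/net, which needs root and a real interface.
sysnet=$(mktemp -d)
mkdir -p "$sysnet/initiator0"

# set_ip side: record the address alongside the device.
echo 10.0.0.1 | tee "$sysnet/initiator0/ifalias" >/dev/null

# get_ip_address side: look it up later with a plain cat.
ip=$(cat "$sysnet/initiator0/ifalias")
echo "$ip"   # 10.0.0.1
```

Stashing the address in `ifalias` gives the test harness a single authoritative place to look it up, in or out of the namespace (the namespaced variant in the trace just prefixes the `cat` with `ip netns exec`).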
00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:09.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:07:09.378 00:07:09.378 --- 10.0.0.1 ping statistics --- 00:07:09.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.378 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:07:09.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:07:09.378 00:07:09.378 --- 10.0.0.2 ping statistics --- 00:07:09.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.378 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:07:09.378 
09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:07:09.378 PING 10.0.0.3 
(10.0.0.3) 56(84) bytes of data. 00:07:09.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:07:09.378 00:07:09.378 --- 10.0.0.3 ping statistics --- 00:07:09.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.378 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:07:09.378 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:07:09.378 09:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:07:09.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:09.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:07:09.379 00:07:09.379 --- 10.0.0.4 ping statistics --- 00:07:09.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.379 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 
-- # NVMF_TARGET_INTERFACE=target0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:07:09.379 09:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:07:09.379 09:00:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:09.379 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=65499 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 65499 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65499 ']' 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.380 09:00:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:09.639 [2024-11-20 09:00:48.360803] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
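The ping loop earlier in this trace resolves every device address the same way: read `/sys/class/net/<dev>/ifalias`, optionally inside the `nvmf_ns_spdk` namespace, where the namespace command array is passed by *name* and dereferenced with a bash nameref (`local -n`). A minimal sketch of that pattern — `get_dev_ip`, `ping_dev`, and the `SYSFS_NET` override are illustrative names, not the actual setup.sh helpers:

```shell
# Resolve the IP recorded in a device's ifalias, optionally through a
# namespace command array whose *name* is passed as the second argument
# (mirroring how NVMF_TARGET_NS_CMD is used in the trace).
get_dev_ip() {
    local dev=$1 in_ns=${2:-} ip
    local sysfs=${SYSFS_NET:-/sys/class/net}  # overridable root, for testing
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns                    # nameref to e.g. (ip netns exec nvmf_ns_spdk)
        ip=$("${ns[@]}" cat "$sysfs/$dev/ifalias")
    else
        ip=$(cat "$sysfs/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"                # fail if no alias is recorded
}

# Ping whatever address the device advertises, once.
ping_dev() {
    local ip
    ip=$(get_dev_ip "$@") || return 1
    ping -c 1 "$ip"
}
```

Passing the namespace command by name keeps the call sites short (`get_dev_ip target0 NVMF_TARGET_NS_CMD`) while letting the same helper run both host-side and namespaced lookups, which is why the trace shows the bare `cat` and the `ip netns exec … cat` variants of the same line numbers.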
00:07:09.639 [2024-11-20 09:00:48.361901] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.639 [2024-11-20 09:00:48.512744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:09.897 [2024-11-20 09:00:48.597373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.897 [2024-11-20 09:00:48.597444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.897 [2024-11-20 09:00:48.597456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.897 [2024-11-20 09:00:48.597465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.897 [2024-11-20 09:00:48.597472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
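Around this point `nvmfappstart` has launched `nvmf_tgt` inside the namespace, recorded its pid in `nvmfpid=65499`, and `waitforlisten` polls (with `max_retries=100`) until the target answers on `/var/tmp/spdk.sock`. A reduced sketch of that wait loop — here a plain file-existence check stands in for the real RPC round-trip that rpc.py performs, so treat it as an approximation rather than the autotest_common.sh source:

```shell
# Wait until $pid is alive *and* its listen socket has appeared, retrying
# up to max_retries times. A bare existence check stands in for the real
# helper's RPC probe of the socket.
waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # process died while we waited
        [ -e "$sock" ] && return 0              # socket showed up: ready
        sleep 0.1
    done
    return 1                                    # retries exhausted
}
```

Checking the pid first matters: if `nvmf_tgt` crashes during DPDK initialization, the loop fails fast instead of burning all 100 retries against a socket that will never appear.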
00:07:09.897 [2024-11-20 09:00:48.598651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.897 [2024-11-20 09:00:48.598712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:09.897 [2024-11-20 09:00:48.598829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.897 [2024-11-20 09:00:48.598831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 [2024-11-20 09:00:49.507294] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:10.831 09:00:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 Malloc0 00:07:10.831 [2024-11-20 09:00:49.611506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=65582 00:07:10.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65582 /var/tmp/bdevperf.sock 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65582 ']' 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:10.831 { 00:07:10.831 "params": { 00:07:10.831 "name": "Nvme$subsystem", 00:07:10.831 "trtype": "$TEST_TRANSPORT", 00:07:10.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:10.831 "adrfam": "ipv4", 00:07:10.831 "trsvcid": "$NVMF_PORT", 00:07:10.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:10.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:10.831 "hdgst": ${hdgst:-false}, 00:07:10.831 "ddgst": ${ddgst:-false} 00:07:10.831 }, 00:07:10.831 "method": "bdev_nvme_attach_controller" 00:07:10.831 } 00:07:10.831 EOF 00:07:10.831 )") 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:10.831 09:00:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:10.831 "params": { 00:07:10.831 "name": "Nvme0", 00:07:10.831 "trtype": "tcp", 00:07:10.831 "traddr": "10.0.0.2", 00:07:10.831 "adrfam": "ipv4", 00:07:10.831 "trsvcid": "4420", 00:07:10.831 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.831 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:10.831 "hdgst": false, 00:07:10.831 "ddgst": false 00:07:10.831 }, 00:07:10.831 "method": "bdev_nvme_attach_controller" 00:07:10.831 }' 00:07:10.831 [2024-11-20 09:00:49.728531] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:10.832 [2024-11-20 09:00:49.728667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65582 ] 00:07:11.397 [2024-11-20 09:00:50.121631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.397 [2024-11-20 09:00:50.232002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.653 Running I/O for 10 seconds... 
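The bdevperf invocation above takes its entire controller configuration as JSON on `/dev/fd/63`, assembled by `gen_nvmf_target_json`: one heredoc fragment per subsystem id, accumulated into an array, comma-joined via `IFS=,`, and normalized with `jq .`. A simplified, self-contained version of that assembly — the fixed `traddr`/`trsvcid` values are placeholders here, and the real helper substitutes them from the `NVMF_*` environment before piping the result through jq:

```shell
# Emit bdev_nvme_attach_controller config fragments, one per subsystem id,
# comma-joined the way gen_nvmf_target_json does with IFS=, before jq.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # the real helper feeds this to: jq .
}
```

Because each fragment is a complete JSON object and the array join inserts exactly one comma between objects, `gen_target_json 0` reproduces the single-controller document visible in the trace, while `gen_target_json 0 1` would attach two controllers from one invocation.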
00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 09:00:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=259 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 259 -ge 100 ']' 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.219 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.219 [2024-11-20 09:00:51.017101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.219 [2024-11-20 09:00:51.017885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.219 [2024-11-20 09:00:51.018043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same 
with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.018941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 
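For context, the xtrace records above (host_management.sh lines 45-64 in the trace prefixes) show the test's `waitforio` helper: it polls `bdev_get_iostat` over the bdevperf RPC socket until the bdev reports at least 100 read ops, retrying up to 10 times, and here succeeds immediately with `read_io_count=259`. A minimal, self-contained sketch of that loop, assuming a stubbed `rpc_cmd` (the real helper drives scripts/rpc.py against `/var/tmp/bdevperf.sock`; the canned JSON below reuses the 259 reads seen in the log):

```shell
# Hypothetical stand-in for the real rpc_cmd helper, so the loop is runnable
# on its own. The actual test invokes scripts/rpc.py against the socket.
rpc_cmd() {
    echo '{"bdevs":[{"name":"Nvme0n1","num_read_ops":259}]}'
}

# Sketch of waitforio: retry up to 10 times, succeed once >=100 reads complete.
waitforio() {
    sock=$1 bdev=$2
    [ -n "$sock" ] || return 1
    [ -n "$bdev" ] || return 1
    ret=1
    i=10
    while [ "$i" -ne 0 ]; do
        # The real script extracts the count with jq -r '.bdevs[0].num_read_ops';
        # sed is used here only to keep the sketch dependency-free.
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            sed -n 's/.*"num_read_ops":\([0-9]*\).*/\1/p')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        i=$((i - 1))
        sleep 1
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1 && echo "I/O is flowing"
```

The 100-read threshold is how the test decides bdevperf traffic is actually reaching the target before it proceeds to remove and re-add the host.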
[2024-11-20 09:00:51.019664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.019971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe2c470 is same with the state(6) to be set 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.220 09:00:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:12.220 [2024-11-20 09:00:51.059797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.220 [2024-11-20 09:00:51.059896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.059921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.220 [2024-11-20 09:00:51.059940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.059962] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.220 [2024-11-20 09:00:51.059978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.060001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:12.220 [2024-11-20 09:00:51.060016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.060033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124e660 is same with the state(6) to be set 00:07:12.220 [2024-11-20 09:00:51.061300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:12.220 [2024-11-20 09:00:51.061735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.061978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.061995] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.220 [2024-11-20 09:00:51.062435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.220 [2024-11-20 09:00:51.062456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 
[2024-11-20 09:00:51.062751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.062971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.062996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 
[2024-11-20 09:00:51.063849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.063982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.063999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.221 [2024-11-20 09:00:51.064264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:12.221 [2024-11-20 09:00:51.064280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:12.222 [2024-11-20 09:00:51.069042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:12.222 task offset: 40960 on job bdev=Nvme0n1 fails 00:07:12.222 00:07:12.222 Latency(us) 00:07:12.222 [2024-11-20T09:00:51.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.222 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:12.222 Job: Nvme0n1 
ended in about 0.61 seconds with error 00:07:12.222 Verification LBA range: start 0x0 length 0x400 00:07:12.222 Nvme0n1 : 0.61 528.71 33.04 105.74 0.00 93787.46 6047.19 93418.59 00:07:12.222 [2024-11-20T09:00:51.141Z] =================================================================================================================== 00:07:12.222 [2024-11-20T09:00:51.141Z] Total : 528.71 33.04 105.74 0.00 93787.46 6047.19 93418.59 00:07:12.222 [2024-11-20 09:00:51.076642] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.222 [2024-11-20 09:00:51.076713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124e660 (9): Bad file descriptor 00:07:12.222 [2024-11-20 09:00:51.087266] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65582 00:07:13.155 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65582) - No such process 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem 
config 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:13.155 { 00:07:13.155 "params": { 00:07:13.155 "name": "Nvme$subsystem", 00:07:13.155 "trtype": "$TEST_TRANSPORT", 00:07:13.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:13.155 "adrfam": "ipv4", 00:07:13.155 "trsvcid": "$NVMF_PORT", 00:07:13.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:13.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:13.155 "hdgst": ${hdgst:-false}, 00:07:13.155 "ddgst": ${ddgst:-false} 00:07:13.155 }, 00:07:13.155 "method": "bdev_nvme_attach_controller" 00:07:13.155 } 00:07:13.155 EOF 00:07:13.155 )") 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:13.155 09:00:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:13.155 "params": { 00:07:13.155 "name": "Nvme0", 00:07:13.155 "trtype": "tcp", 00:07:13.155 "traddr": "10.0.0.2", 00:07:13.155 "adrfam": "ipv4", 00:07:13.155 "trsvcid": "4420", 00:07:13.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:13.155 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:13.155 "hdgst": false, 00:07:13.155 "ddgst": false 00:07:13.155 }, 00:07:13.155 "method": "bdev_nvme_attach_controller" 00:07:13.155 }' 00:07:13.413 [2024-11-20 09:00:52.102573] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
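The `gen_nvmf_target_json` trace above builds the bdevperf controller config from a heredoc template and hands it to bdevperf via `--json /dev/fd/62`. A hedged sketch of that expansion, with the variable values copied from the config printed in the log (in the real run they come from the environment set up by nvmftestinit, and the result is further piped through `jq .`):

```shell
# Values mirroring the expanded config shown in the log above.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=0
hdgst=false
ddgst=false

# The heredoc template from nvmf/common.sh: shell variables are substituted
# as the config fragment is captured.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": $hdgst,
    "ddgst": $ddgst
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

Feeding the JSON over a file descriptor rather than a temp file keeps the generated config out of the workspace, which is why the cleanup steps later only remove state files and rpcs.txt.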
00:07:13.413 [2024-11-20 09:00:52.102702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65632 ] 00:07:13.413 [2024-11-20 09:00:52.312537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.670 [2024-11-20 09:00:52.397659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.671 Running I/O for 1 seconds... 00:07:15.045 1408.00 IOPS, 88.00 MiB/s 00:07:15.045 Latency(us) 00:07:15.045 [2024-11-20T09:00:53.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.045 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:15.045 Verification LBA range: start 0x0 length 0x400 00:07:15.045 Nvme0n1 : 1.02 1441.24 90.08 0.00 0.00 43537.92 6255.71 37891.72 00:07:15.045 [2024-11-20T09:00:53.964Z] =================================================================================================================== 00:07:15.045 [2024-11-20T09:00:53.964Z] Total : 1441.24 90.08 0.00 0.00 43537.92 6255.71 37891.72 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:15.045 09:00:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:15.045 09:00:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:15.304 rmmod nvme_tcp 00:07:15.304 rmmod nvme_fabrics 00:07:15.304 rmmod nvme_keyring 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 65499 ']' 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 65499 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65499 ']' 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65499 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65499 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.304 killing process with pid 65499 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65499' 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65499 00:07:15.304 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65499 00:07:15.561 [2024-11-20 09:00:54.320405] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:15.561 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:07:15.561 09:00:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:07:15.562 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:15.562 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:07:15.562 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@114 -- # local 
dev=initiator1 in_ns= 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:07:15.906 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 
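The `iptr` step traced above (common.sh@548) relies on every SPDK-added firewall rule carrying an `SPDK_NVMF` comment, so teardown can drop them all at once with `iptables-save | grep -v SPDK_NVMF | iptables-restore`. Since running that pipeline needs root and live netfilter tables, the sketch below simulates only the filter stage on a captured ruleset string:

```shell
# Simulation of the SPDK iptables cleanup idiom: rules tagged with an
# "SPDK_NVMF" comment at setup time are stripped from a saved ruleset;
# the real script then feeds the remainder back through iptables-restore.
ruleset='-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i lo -j ACCEPT'
kept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Tagging rules with a fixed comment at insertion time is what makes this one-line teardown possible; without it, each rule would have to be deleted individually by exact spec.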
00:07:15.907 00:07:15.907 real 0m7.035s 00:07:15.907 user 0m26.417s 00:07:15.907 sys 0m1.681s 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 ************************************ 00:07:15.907 END TEST nvmf_host_management 00:07:15.907 ************************************ 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.907 ************************************ 00:07:15.907 START TEST nvmf_lvol 00:07:15.907 ************************************ 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:15.907 * Looking for test storage... 
00:07:15.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.907 09:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.907 --rc genhtml_branch_coverage=1 00:07:15.907 --rc genhtml_function_coverage=1 00:07:15.907 --rc genhtml_legend=1 00:07:15.907 --rc geninfo_all_blocks=1 00:07:15.907 --rc geninfo_unexecuted_blocks=1 
00:07:15.907 00:07:15.907 ' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.907 --rc genhtml_branch_coverage=1 00:07:15.907 --rc genhtml_function_coverage=1 00:07:15.907 --rc genhtml_legend=1 00:07:15.907 --rc geninfo_all_blocks=1 00:07:15.907 --rc geninfo_unexecuted_blocks=1 00:07:15.907 00:07:15.907 ' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.907 --rc genhtml_branch_coverage=1 00:07:15.907 --rc genhtml_function_coverage=1 00:07:15.907 --rc genhtml_legend=1 00:07:15.907 --rc geninfo_all_blocks=1 00:07:15.907 --rc geninfo_unexecuted_blocks=1 00:07:15.907 00:07:15.907 ' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.907 --rc genhtml_branch_coverage=1 00:07:15.907 --rc genhtml_function_coverage=1 00:07:15.907 --rc genhtml_legend=1 00:07:15.907 --rc geninfo_all_blocks=1 00:07:15.907 --rc geninfo_unexecuted_blocks=1 00:07:15.907 00:07:15.907 ' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:15.907 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:15.908 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: 
[: : integer expression expected 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 
-- # [[ virt != virt ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:15.908 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:16.167 09:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:07:16.167 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:16.168 09:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 
up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:07:16.168 10.0.0.1 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:07:16.168 10.0.0.2 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 
00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:16.168 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # 
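[editorial note] The `val_to_ip` calls in the trace above turn 32-bit pool values into dotted-quad addresses (e.g. `167772161` becomes `10.0.0.1` via `printf '%u.%u.%u.%u\n' 10 0 0 1`). A minimal sketch of such a helper, reconstructed from the printf output shown — the byte-shifting here is an assumption, only the output format is confirmed by the log:

```shell
# Hypothetical reconstruction of a val_to_ip-style helper: split a 32-bit
# integer into four bytes and print them dotted-quad, matching the trace
# (167772161 -> 10.0.0.1, 167772164 -> 10.0.0.4).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772164   # -> 10.0.0.4
```

This explains why consecutive pool values (`167772161`, `167772162`, …) yield the consecutive addresses `10.0.0.1`, `10.0.0.2`, … seen throughout the log.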
setup_interface_pair 1 veth 167772163 tcp 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:07:16.169 09:00:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # 
[[ tcp == tcp ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:07:16.169 10.0.0.3 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:07:16.169 10.0.0.4 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:16.169 09:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:07:16.169 09:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:07:16.169 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target1_br 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:07:16.428 09:00:55 
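[editorial note] At this point both interface pairs are built. The xtrace above is hard to follow through the `eval` wrappers, so here is a condensed, hedged sketch of the per-pair topology it constructs. The names `nvmf_ns_spdk`, `nvmf_br`, and port 4420 come from the log; the `run()` dry-run wrapper is an illustrative addition, not part of setup.sh:

```shell
# Sketch of one setup_interface_pair iteration, under stated assumptions.
# run() echoes instead of executing when DRY_RUN is set, so the sketch can
# be exercised without root or real interfaces.
run() { ${DRY_RUN:+echo} "$@"; }

setup_interface_pair() {
  local id=$1 initiator="initiator$1" target="target$1"
  # Two veth pairs: one end carries traffic, the *_br peer joins the bridge.
  run ip link add "$initiator" type veth peer name "${initiator}_br"
  run ip link add "$target" type veth peer name "${target}_br"
  # The target end lives inside the nvmf_ns_spdk namespace.
  run ip link set "$target" netns nvmf_ns_spdk
  # Consecutive pool addresses: odd = initiator, even = target.
  run ip addr add "10.0.0.$((2 * id + 1))/24" dev "$initiator"
  run ip netns exec nvmf_ns_spdk ip addr add "10.0.0.$((2 * id + 2))/24" dev "$target"
  # Enslave both bridge-side peers to nvmf_br.
  run ip link set "${initiator}_br" master nvmf_br
  run ip link set "${target}_br" master nvmf_br
  # Open the NVMe/TCP listen port on this initiator, as ipts does above.
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}
```

For `id=0` this yields `initiator0`/`target0` at `10.0.0.1`/`10.0.0.2`, and for `id=1` the `10.0.0.3`/`10.0.0.4` pair, matching the trace.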
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:16.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:07:16.428 00:07:16.428 --- 10.0.0.1 ping statistics --- 00:07:16.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.428 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.428 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:07:16.429 09:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:07:16.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:07:16.429 00:07:16.429 --- 10.0.0.2 ping statistics --- 00:07:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.429 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:07:16.429 09:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:07:16.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:16.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:07:16.429 00:07:16.429 --- 10.0.0.3 ping statistics --- 00:07:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.429 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:07:16.429 09:00:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:07:16.429 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:16.429 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:07:16.429 00:07:16.429 --- 10.0.0.4 ping statistics --- 00:07:16.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.429 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:16.429 09:00:55 
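[editorial note] The four ping records above are the connectivity check `ping_ips 2` performs after setup: each pair is verified in both directions — the initiator address is pinged from inside the target namespace, and the target address from the host. A hedged sketch of that loop; the `run()` dry-run wrapper is illustrative, not from setup.sh:

```shell
# Sketch of the ping_ips verification pass, assuming the odd/even address
# layout built earlier (initiatorN = 10.0.0.<2N+1>, targetN = 10.0.0.<2N+2>).
run() { ${DRY_RUN:+echo} "$@"; }

ping_ips() {
  local pairs=$1 pair
  for pair in $(seq 0 $((pairs - 1))); do
    # Initiator IP reached from inside the target's namespace...
    run ip netns exec nvmf_ns_spdk ping -c 1 "10.0.0.$((2 * pair + 1))"
    # ...and the target IP reached from the host side.
    run ping -c 1 "10.0.0.$((2 * pair + 2))"
  done
}
```

With two pairs this issues exactly the four single-packet pings (to `.1`, `.2`, `.3`, `.4`) whose statistics appear in the log.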
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.429 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=65902 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 65902 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65902 ']' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.430 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.430 [2024-11-20 09:00:55.339349] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:16.430 [2024-11-20 09:00:55.339504] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.688 [2024-11-20 09:00:55.513099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.688 [2024-11-20 09:00:55.578324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.688 [2024-11-20 09:00:55.578375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:16.688 [2024-11-20 09:00:55.578386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.688 [2024-11-20 09:00:55.578395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.688 [2024-11-20 09:00:55.578403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.688 [2024-11-20 09:00:55.579522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.688 [2024-11-20 09:00:55.579631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.688 [2024-11-20 09:00:55.579636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.946 09:00:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.205 [2024-11-20 09:00:56.035885] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.205 09:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:17.770 09:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:17.770 09:00:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.027 09:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:18.027 09:00:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:18.285 09:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:18.543 09:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e8c075cc-1fa6-4a8f-92cd-cc5ac6f84241 00:07:18.543 09:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e8c075cc-1fa6-4a8f-92cd-cc5ac6f84241 lvol 20 00:07:19.108 09:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f85af940-449c-434b-a24a-416e58cf32e5 00:07:19.108 09:00:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.366 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f85af940-449c-434b-a24a-416e58cf32e5 00:07:19.624 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:19.883 [2024-11-20 09:00:58.664102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.883 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery 
-t tcp -a 10.0.0.2 -s 4420 00:07:20.140 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=66042 00:07:20.141 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:20.141 09:00:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:21.075 09:00:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot f85af940-449c-434b-a24a-416e58cf32e5 MY_SNAPSHOT 00:07:21.643 09:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=696f760f-d3ee-4452-a2a7-4fdd04652e00 00:07:21.643 09:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize f85af940-449c-434b-a24a-416e58cf32e5 30 00:07:21.901 09:01:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 696f760f-d3ee-4452-a2a7-4fdd04652e00 MY_CLONE 00:07:22.159 09:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=df3bab3e-2c05-4669-aaa5-67535deeea11 00:07:22.159 09:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate df3bab3e-2c05-4669-aaa5-67535deeea11 00:07:23.094 09:01:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 66042 00:07:31.203 Initializing NVMe Controllers 00:07:31.203 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:31.203 Controller IO queue size 128, less than required. 00:07:31.203 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:31.203 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:31.203 Initialization complete. Launching workers. 00:07:31.203 ======================================================== 00:07:31.203 Latency(us) 00:07:31.203 Device Information : IOPS MiB/s Average min max 00:07:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10503.20 41.03 12195.28 2253.18 56760.57 00:07:31.203 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10399.80 40.62 12311.31 2209.83 49866.85 00:07:31.203 ======================================================== 00:07:31.203 Total : 20903.00 81.65 12253.01 2209.83 56760.57 00:07:31.203 00:07:31.203 09:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:31.203 09:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f85af940-449c-434b-a24a-416e58cf32e5 00:07:31.203 09:01:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e8c075cc-1fa6-4a8f-92cd-cc5ac6f84241 00:07:31.460 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:31.460 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:31.460 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:31.460 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:31.461 
09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:31.461 rmmod nvme_tcp 00:07:31.461 rmmod nvme_fabrics 00:07:31.461 rmmod nvme_keyring 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 65902 ']' 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 65902 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65902 ']' 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65902 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65902 00:07:31.461 killing process with pid 65902 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65902' 00:07:31.461 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65902 00:07:31.461 
09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65902 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:31.718 09:01:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:07:31.718 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:07:31.978 09:01:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:07:31.978 00:07:31.978 real 0m16.082s 00:07:31.978 user 1m6.611s 00:07:31.978 sys 0m4.006s 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.978 ************************************ 00:07:31.978 END TEST nvmf_lvol 00:07:31.978 ************************************ 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:31.978 ************************************ 00:07:31.978 START TEST nvmf_lvs_grow 00:07:31.978 ************************************ 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 
00:07:31.978 * Looking for test storage... 00:07:31.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:31.978 09:01:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:31.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.978 --rc genhtml_branch_coverage=1 
00:07:31.978 --rc genhtml_function_coverage=1 00:07:31.978 --rc genhtml_legend=1 00:07:31.978 --rc geninfo_all_blocks=1 00:07:31.978 --rc geninfo_unexecuted_blocks=1 00:07:31.978 00:07:31.978 ' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:31.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.978 --rc genhtml_branch_coverage=1 00:07:31.978 --rc genhtml_function_coverage=1 00:07:31.978 --rc genhtml_legend=1 00:07:31.978 --rc geninfo_all_blocks=1 00:07:31.978 --rc geninfo_unexecuted_blocks=1 00:07:31.978 00:07:31.978 ' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:31.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.978 --rc genhtml_branch_coverage=1 00:07:31.978 --rc genhtml_function_coverage=1 00:07:31.978 --rc genhtml_legend=1 00:07:31.978 --rc geninfo_all_blocks=1 00:07:31.978 --rc geninfo_unexecuted_blocks=1 00:07:31.978 00:07:31.978 ' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:31.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:31.978 --rc genhtml_branch_coverage=1 00:07:31.978 --rc genhtml_function_coverage=1 00:07:31.978 --rc genhtml_legend=1 00:07:31.978 --rc geninfo_all_blocks=1 00:07:31.978 --rc geninfo_unexecuted_blocks=1 00:07:31.978 00:07:31.978 ' 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.978 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.242 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:07:32.243 09:01:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:32.243 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # 
remove_target_ns 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:07:32.243 
09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:07:32.243 09:01:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:32.243 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 
00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:07:32.244 09:01:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:07:32.244 10.0.0.1 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:07:32.244 
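The `val_to_ip` calls traced above turn a 32-bit integer from the IP pool into dotted-quad form (167772161 is 0x0A000001, hence 10.0.0.1). A minimal standalone sketch of that conversion — the function name and final `printf` match the trace, but the bit-shift body is an assumption, since the log only shows the formatted output:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad IPv4 notation,
# producing the same '%u.%u.%u.%u' output seen in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(((val >> 24) & 0xff)) \
    $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) \
    $((val & 0xff))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

This is why consecutive pool values map to consecutive host addresses in the 10.0.0.0/24 range used for the initiator/target veth pairs.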
09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:07:32.244 10.0.0.2 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # 
[[ veth == veth ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:07:32.244 
09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:07:32.244 09:01:11 
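Each interface pair consumes two consecutive addresses from the pool: `ips=("$ip" $((++ip)))` captures the current value and its pre-incremented successor, and the outer loop then advances the pool by two (`ip_pool += 2`). That is why pair 0 gets 167772161/167772162 (10.0.0.1/.2 for initiator0/target0) and pair 1 gets 167772163/167772164 (10.0.0.3/.4). A standalone sketch of that allocation loop (the function name `allocate_pairs` is hypothetical; the arithmetic mirrors the `setup_interfaces` trace):

```shell
#!/usr/bin/env bash
# Allocate two consecutive pool values per initiator/target pair,
# mirroring the ips=("$ip" $((++ip))) / ip_pool += 2 pattern in the trace.
allocate_pairs() {
  local no=$1 ip_pool=$((0x0a000001)) pair
  for ((pair = 0; pair < no; pair++, ip_pool += 2)); do
    local ip=$ip_pool
    local ips=("$ip" $((++ip)))  # initiator address, then target address
    echo "pair $pair: initiator=${ips[0]} target=${ips[1]}"
  done
}

allocate_pairs 2
```

Running it with `no=2` reproduces the two pairs the trace sets up before handing the values to `set_ip`.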
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 
00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:32.244 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # 
val_to_ip 167772163 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:07:32.245 10.0.0.3 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:07:32.245 09:01:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:07:32.245 10.0.0.4 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:07:32.245 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:07:32.508 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:07:32.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:32.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:07:32.509
00:07:32.509 --- 10.0.0.1 ping statistics ---
00:07:32.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.509 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:07:32.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:32.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms
00:07:32.509
00:07:32.509 --- 10.0.0.2 ping statistics ---
00:07:32.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.509 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:07:32.509 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:07:32.509 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms
00:07:32.509
00:07:32.509 --- 10.0.0.3 ping statistics ---
00:07:32.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.509 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:07:32.509 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:07:32.510 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:07:32.510 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms
00:07:32.510
00:07:32.510 --- 10.0.0.4 ping statistics ---
00:07:32.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:32.510 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ ))
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
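The xtrace above records setup.sh's add_to_bridge/set_up/ping_ips helpers one bash builtin at a time. Condensed into plain commands, the fabric wiring being verified looks roughly like the sketch below. This is a simplified illustration, not SPDK's actual nvmf/setup.sh: the device, bridge, namespace, and address names are taken from the log; the veth creation itself happened earlier in the trace.

```shell
# Sketch: attach the veth host ends to the test bridge and bring them up.
ip link set initiator1_br master nvmf_br   # add_to_bridge initiator1_br
ip link set target1_br master nvmf_br      # add_to_bridge target1_br
ip link set initiator1_br up               # set_up initiator1_br
ip link set target1_br up                  # set_up target1_br

# Allow NVMe/TCP traffic in; the comment lets teardown find the rule later.
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'

# ping_ips: verify reachability in both directions, for each pair.
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target netns -> initiator
ping -c 1 10.0.0.2                              # host -> target in netns
```

The per-device IP is read back from `/sys/class/net/<dev>/ifalias`, which is where the earlier setup stages stored each interface's assigned address.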
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=66460
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 66460
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66460 ']'
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:32.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:32.510 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:32.511 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:32.511 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:32.769 [2024-11-20 09:01:11.428412] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
[2024-11-20 09:01:11.428521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 09:01:11.581625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 09:01:11.648861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 09:01:11.648930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 09:01:11.648944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 09:01:11.648955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 09:01:11.648964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
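At this point nvmfappstart has launched nvmf_tgt inside the test namespace and waitforlisten is polling the RPC socket. Reduced to its essentials, the startup pattern is roughly the following. This is a hedged sketch, not the real autotest_common.sh helpers: paths, core mask, and the retry count match the log, while the polling loop is simplified (the real helper also checks that the pid is still alive).

```shell
# Sketch: start the SPDK NVMe-oF target in the netns, then wait for its
# JSON-RPC socket to come up before issuing any rpc.py commands.
ip netns exec nvmf_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for ((i = 0; i < 100; i++)); do        # mirrors: local max_retries=100
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                          # socket is listening; target is ready
    fi
    sleep 0.1
done
```

Polling an innocuous RPC such as `rpc_get_methods` is the usual way to detect readiness, since the UNIX socket only answers once the app's RPC server has started.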
00:07:32.769 [2024-11-20 09:01:11.649450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:33.027 09:01:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:33.285 [2024-11-20 09:01:12.098553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:07:33.285 ************************************
00:07:33.285 START TEST lvs_grow_clean
00:07:33.285 ************************************
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:07:33.285 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:07:33.545 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:07:33.545 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:07:34.111 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3339e6b1-0656-49ff-97fc-2127e32a5f78
00:07:34.111 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:07:34.111 09:01:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78
00:07:34.369 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:07:34.369 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:07:34.369 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 lvol 150
00:07:34.627 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f0f8fd2-cdfa-420a-bd9e-33dcf544e400
00:07:34.627 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:07:34.627 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:07:34.886 [2024-11-20 09:01:13.732749] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
[2024-11-20 09:01:13.732843] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:07:34.886 true
00:07:34.886 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78
00:07:34.886 09:01:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:07:35.144 09:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:07:35.144 09:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:35.401 09:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f0f8fd2-cdfa-420a-bd9e-33dcf544e400
00:07:35.659 09:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:35.916 [2024-11-20 09:01:14.809365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:35.916 09:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66620
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66620 /var/tmp/bdevperf.sock
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66620 ']'
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.489 09:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:07:36.490 [2024-11-20 09:01:15.146927] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
[2024-11-20 09:01:15.147035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66620 ]
00:07:36.490 [2024-11-20 09:01:15.324053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.747 [2024-11-20 09:01:15.410306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:37.313 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.313 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:07:37.313 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:07:37.570 Nvme0n1
00:07:37.570 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:07:38.136 [
00:07:38.136 {
00:07:38.136 "aliases": [
00:07:38.136 "4f0f8fd2-cdfa-420a-bd9e-33dcf544e400"
00:07:38.136 ],
00:07:38.136 "assigned_rate_limits": {
00:07:38.136 "r_mbytes_per_sec": 0,
00:07:38.136 "rw_ios_per_sec": 0,
00:07:38.136 "rw_mbytes_per_sec": 0,
00:07:38.136 "w_mbytes_per_sec": 0
00:07:38.136 },
00:07:38.136 "block_size": 4096,
00:07:38.136 "claimed": false,
00:07:38.136 "driver_specific": {
00:07:38.136 "mp_policy": "active_passive",
00:07:38.136 "nvme": [
00:07:38.136 {
00:07:38.136 "ctrlr_data": {
00:07:38.136 "ana_reporting": false,
00:07:38.136 "cntlid": 1,
00:07:38.136 "firmware_revision": "25.01",
00:07:38.136 "model_number": "SPDK bdev Controller",
00:07:38.136 "multi_ctrlr": true,
00:07:38.136 "oacs": {
00:07:38.136 "firmware": 0,
00:07:38.136 "format": 0,
00:07:38.136 "ns_manage": 0,
00:07:38.136 "security": 0
00:07:38.136 },
00:07:38.136 "serial_number": "SPDK0",
00:07:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:38.136 "vendor_id": "0x8086"
00:07:38.136 },
00:07:38.136 "ns_data": {
00:07:38.136 "can_share": true,
00:07:38.136 "id": 1
00:07:38.136 },
00:07:38.136 "trid": {
00:07:38.136 "adrfam": "IPv4",
00:07:38.136 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:38.136 "traddr": "10.0.0.2",
00:07:38.136 "trsvcid": "4420",
00:07:38.136 "trtype": "TCP"
00:07:38.136 },
00:07:38.136 "vs": {
00:07:38.136 "nvme_version": "1.3"
00:07:38.136 }
00:07:38.136 }
00:07:38.136 ]
00:07:38.136 },
00:07:38.136 "memory_domains": [
00:07:38.136 {
00:07:38.136 "dma_device_id": "system",
00:07:38.136 "dma_device_type": 1
00:07:38.136 }
00:07:38.136 ],
00:07:38.136 "name": "Nvme0n1",
00:07:38.136 "num_blocks": 38912,
00:07:38.136 "numa_id": -1,
00:07:38.136 "product_name": "NVMe disk",
00:07:38.136 "supported_io_types": {
00:07:38.136 "abort": true,
00:07:38.136 "compare": true,
00:07:38.136 "compare_and_write": true,
00:07:38.136 "copy": true,
00:07:38.136 "flush": true,
00:07:38.136 "get_zone_info": false,
00:07:38.136 "nvme_admin": true,
00:07:38.136 "nvme_io": true,
00:07:38.136 "nvme_io_md": false,
00:07:38.136 "nvme_iov_md": false,
00:07:38.136 "read": true,
00:07:38.136 "reset": true,
00:07:38.136 "seek_data": false,
00:07:38.136 "seek_hole": false,
00:07:38.136 "unmap": true,
00:07:38.136 "write": true,
00:07:38.136 "write_zeroes": true,
00:07:38.136 "zcopy": false,
00:07:38.136 "zone_append": false,
00:07:38.136 "zone_management": false
00:07:38.136 },
00:07:38.136 "uuid": "4f0f8fd2-cdfa-420a-bd9e-33dcf544e400",
00:07:38.136 "zoned": false
00:07:38.136 }
00:07:38.136 ]
00:07:38.136 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66662
00:07:38.136 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:07:38.136 09:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:07:38.136 Running I/O for 10 seconds...
00:07:39.069 Latency(us) 00:07:39.069 [2024-11-20T09:01:17.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.069 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.069 Nvme0n1 : 1.00 7696.00 30.06 0.00 0.00 0.00 0.00 0.00 00:07:39.069 [2024-11-20T09:01:17.988Z] =================================================================================================================== 00:07:39.069 [2024-11-20T09:01:17.988Z] Total : 7696.00 30.06 0.00 0.00 0.00 0.00 0.00 00:07:39.069 00:07:40.004 09:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:40.262 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.262 Nvme0n1 : 2.00 7465.50 29.16 0.00 0.00 0.00 0.00 0.00 00:07:40.262 [2024-11-20T09:01:19.181Z] =================================================================================================================== 00:07:40.262 [2024-11-20T09:01:19.181Z] Total : 7465.50 29.16 0.00 0.00 0.00 0.00 0.00 00:07:40.262 00:07:40.262 true 00:07:40.520 09:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:40.520 09:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:40.783 09:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:40.783 09:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:40.783 09:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66662 00:07:41.041 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:07:41.041 Nvme0n1 : 3.00 7386.33 28.85 0.00 0.00 0.00 0.00 0.00 00:07:41.041 [2024-11-20T09:01:19.960Z] =================================================================================================================== 00:07:41.041 [2024-11-20T09:01:19.960Z] Total : 7386.33 28.85 0.00 0.00 0.00 0.00 0.00 00:07:41.041 00:07:42.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.415 Nvme0n1 : 4.00 7364.75 28.77 0.00 0.00 0.00 0.00 0.00 00:07:42.415 [2024-11-20T09:01:21.334Z] =================================================================================================================== 00:07:42.415 [2024-11-20T09:01:21.335Z] Total : 7364.75 28.77 0.00 0.00 0.00 0.00 0.00 00:07:42.416 00:07:43.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.350 Nvme0n1 : 5.00 7286.40 28.46 0.00 0.00 0.00 0.00 0.00 00:07:43.350 [2024-11-20T09:01:22.269Z] =================================================================================================================== 00:07:43.350 [2024-11-20T09:01:22.269Z] Total : 7286.40 28.46 0.00 0.00 0.00 0.00 0.00 00:07:43.350 00:07:44.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.285 Nvme0n1 : 6.00 7283.17 28.45 0.00 0.00 0.00 0.00 0.00 00:07:44.285 [2024-11-20T09:01:23.204Z] =================================================================================================================== 00:07:44.285 [2024-11-20T09:01:23.204Z] Total : 7283.17 28.45 0.00 0.00 0.00 0.00 0.00 00:07:44.285 00:07:45.249 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.250 Nvme0n1 : 7.00 7253.43 28.33 0.00 0.00 0.00 0.00 0.00 00:07:45.250 [2024-11-20T09:01:24.169Z] =================================================================================================================== 00:07:45.250 [2024-11-20T09:01:24.169Z] Total : 7253.43 28.33 0.00 0.00 0.00 
0.00 0.00 00:07:45.250 00:07:46.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.184 Nvme0n1 : 8.00 7302.62 28.53 0.00 0.00 0.00 0.00 0.00 00:07:46.184 [2024-11-20T09:01:25.103Z] =================================================================================================================== 00:07:46.184 [2024-11-20T09:01:25.103Z] Total : 7302.62 28.53 0.00 0.00 0.00 0.00 0.00 00:07:46.184 00:07:47.118 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.118 Nvme0n1 : 9.00 7321.33 28.60 0.00 0.00 0.00 0.00 0.00 00:07:47.118 [2024-11-20T09:01:26.037Z] =================================================================================================================== 00:07:47.118 [2024-11-20T09:01:26.037Z] Total : 7321.33 28.60 0.00 0.00 0.00 0.00 0.00 00:07:47.118 00:07:48.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.053 Nvme0n1 : 10.00 7329.20 28.63 0.00 0.00 0.00 0.00 0.00 00:07:48.053 [2024-11-20T09:01:26.972Z] =================================================================================================================== 00:07:48.053 [2024-11-20T09:01:26.972Z] Total : 7329.20 28.63 0.00 0.00 0.00 0.00 0.00 00:07:48.053 00:07:48.053 00:07:48.053 Latency(us) 00:07:48.053 [2024-11-20T09:01:26.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.053 Nvme0n1 : 10.01 7331.62 28.64 0.00 0.00 17454.70 7685.59 107717.35 00:07:48.053 [2024-11-20T09:01:26.972Z] =================================================================================================================== 00:07:48.053 [2024-11-20T09:01:26.972Z] Total : 7331.62 28.64 0.00 0.00 17454.70 7685.59 107717.35 00:07:48.053 { 00:07:48.053 "results": [ 00:07:48.053 { 00:07:48.053 "job": "Nvme0n1", 00:07:48.053 "core_mask": "0x2", 00:07:48.053 "workload": "randwrite", 
00:07:48.053 "status": "finished", 00:07:48.053 "queue_depth": 128, 00:07:48.053 "io_size": 4096, 00:07:48.053 "runtime": 10.014158, 00:07:48.053 "iops": 7331.619892556119, 00:07:48.053 "mibps": 28.63914020529734, 00:07:48.053 "io_failed": 0, 00:07:48.053 "io_timeout": 0, 00:07:48.053 "avg_latency_us": 17454.69993828781, 00:07:48.053 "min_latency_us": 7685.585454545455, 00:07:48.053 "max_latency_us": 107717.35272727272 00:07:48.053 } 00:07:48.053 ], 00:07:48.053 "core_count": 1 00:07:48.053 } 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66620 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66620 ']' 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66620 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.311 09:01:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66620 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:48.311 killing process with pid 66620 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66620' 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66620 00:07:48.311 Received shutdown signal, test time was about 10.000000 seconds 00:07:48.311 00:07:48.311 
Latency(us) 00:07:48.311 [2024-11-20T09:01:27.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.311 [2024-11-20T09:01:27.230Z] =================================================================================================================== 00:07:48.311 [2024-11-20T09:01:27.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66620 00:07:48.311 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:48.570 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.136 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:49.137 09:01:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:49.395 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:49.395 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:49.395 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.654 [2024-11-20 09:01:28.373583] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:49.654 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:49.913 2024/11/20 09:01:28 error on 
JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:3339e6b1-0656-49ff-97fc-2127e32a5f78], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:07:49.913 request: 00:07:49.913 { 00:07:49.913 "method": "bdev_lvol_get_lvstores", 00:07:49.913 "params": { 00:07:49.913 "uuid": "3339e6b1-0656-49ff-97fc-2127e32a5f78" 00:07:49.913 } 00:07:49.913 } 00:07:49.913 Got JSON-RPC error response 00:07:49.913 GoRPCClient: error on JSON-RPC call 00:07:49.913 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:49.913 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.913 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.913 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.913 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.171 aio_bdev 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4f0f8fd2-cdfa-420a-bd9e-33dcf544e400 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4f0f8fd2-cdfa-420a-bd9e-33dcf544e400 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.171 09:01:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.171 09:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.427 09:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4f0f8fd2-cdfa-420a-bd9e-33dcf544e400 -t 2000 00:07:50.994 [ 00:07:50.994 { 00:07:50.994 "aliases": [ 00:07:50.994 "lvs/lvol" 00:07:50.994 ], 00:07:50.994 "assigned_rate_limits": { 00:07:50.994 "r_mbytes_per_sec": 0, 00:07:50.994 "rw_ios_per_sec": 0, 00:07:50.994 "rw_mbytes_per_sec": 0, 00:07:50.994 "w_mbytes_per_sec": 0 00:07:50.994 }, 00:07:50.994 "block_size": 4096, 00:07:50.994 "claimed": false, 00:07:50.994 "driver_specific": { 00:07:50.994 "lvol": { 00:07:50.994 "base_bdev": "aio_bdev", 00:07:50.994 "clone": false, 00:07:50.994 "esnap_clone": false, 00:07:50.994 "lvol_store_uuid": "3339e6b1-0656-49ff-97fc-2127e32a5f78", 00:07:50.994 "num_allocated_clusters": 38, 00:07:50.994 "snapshot": false, 00:07:50.994 "thin_provision": false 00:07:50.994 } 00:07:50.994 }, 00:07:50.994 "name": "4f0f8fd2-cdfa-420a-bd9e-33dcf544e400", 00:07:50.994 "num_blocks": 38912, 00:07:50.994 "product_name": "Logical Volume", 00:07:50.994 "supported_io_types": { 00:07:50.994 "abort": false, 00:07:50.994 "compare": false, 00:07:50.994 "compare_and_write": false, 00:07:50.994 "copy": false, 00:07:50.994 "flush": false, 00:07:50.994 "get_zone_info": false, 00:07:50.994 "nvme_admin": false, 00:07:50.994 "nvme_io": false, 00:07:50.994 "nvme_io_md": false, 00:07:50.994 "nvme_iov_md": false, 00:07:50.994 "read": true, 00:07:50.994 "reset": true, 00:07:50.994 "seek_data": true, 00:07:50.994 "seek_hole": true, 00:07:50.994 "unmap": true, 00:07:50.994 "write": true, 00:07:50.994 "write_zeroes": true, 00:07:50.994 "zcopy": false, 
00:07:50.994 "zone_append": false, 00:07:50.994 "zone_management": false 00:07:50.994 }, 00:07:50.994 "uuid": "4f0f8fd2-cdfa-420a-bd9e-33dcf544e400", 00:07:50.994 "zoned": false 00:07:50.994 } 00:07:50.994 ] 00:07:50.994 09:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:50.994 09:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:50.994 09:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:51.252 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:51.252 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:51.252 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.510 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.510 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4f0f8fd2-cdfa-420a-bd9e-33dcf544e400 00:07:52.077 09:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3339e6b1-0656-49ff-97fc-2127e32a5f78 00:07:52.335 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.593 09:01:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.160 ************************************ 00:07:53.160 END TEST lvs_grow_clean 00:07:53.160 ************************************ 00:07:53.160 00:07:53.160 real 0m19.798s 00:07:53.160 user 0m19.179s 00:07:53.160 sys 0m2.353s 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.160 ************************************ 00:07:53.160 START TEST lvs_grow_dirty 00:07:53.160 ************************************ 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:53.160 09:01:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:53.160 09:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.418 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:53.418 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( 
data_clusters == 49 )) 00:07:53.985 09:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 lvol 150 00:07:54.552 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f37f1d97-7312-48b5-8e74-a231f1cca06f 00:07:54.552 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:54.552 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.811 [2024-11-20 09:01:33.517826] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.811 [2024-11-20 09:01:33.517932] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.811 true 00:07:54.811 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:54.811 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:07:55.069 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.069 09:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.328 09:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f37f1d97-7312-48b5-8e74-a231f1cca06f 00:07:55.586 09:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.844 [2024-11-20 09:01:34.702463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.844 09:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=67080 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 67080 /var/tmp/bdevperf.sock 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67080 ']' 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.411 09:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.411 [2024-11-20 09:01:35.169201] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:56.411 [2024-11-20 09:01:35.169340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67080 ] 00:07:56.411 [2024-11-20 09:01:35.321933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.669 [2024-11-20 09:01:35.390975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.693 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.693 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:57.693 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:57.693 Nvme0n1 00:07:57.951 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:58.209 [ 00:07:58.209 { 00:07:58.209 "aliases": [ 00:07:58.209 "f37f1d97-7312-48b5-8e74-a231f1cca06f" 00:07:58.209 
], 00:07:58.209 "assigned_rate_limits": { 00:07:58.209 "r_mbytes_per_sec": 0, 00:07:58.209 "rw_ios_per_sec": 0, 00:07:58.209 "rw_mbytes_per_sec": 0, 00:07:58.209 "w_mbytes_per_sec": 0 00:07:58.209 }, 00:07:58.209 "block_size": 4096, 00:07:58.209 "claimed": false, 00:07:58.209 "driver_specific": { 00:07:58.209 "mp_policy": "active_passive", 00:07:58.209 "nvme": [ 00:07:58.209 { 00:07:58.209 "ctrlr_data": { 00:07:58.209 "ana_reporting": false, 00:07:58.209 "cntlid": 1, 00:07:58.209 "firmware_revision": "25.01", 00:07:58.209 "model_number": "SPDK bdev Controller", 00:07:58.209 "multi_ctrlr": true, 00:07:58.209 "oacs": { 00:07:58.209 "firmware": 0, 00:07:58.209 "format": 0, 00:07:58.209 "ns_manage": 0, 00:07:58.209 "security": 0 00:07:58.209 }, 00:07:58.209 "serial_number": "SPDK0", 00:07:58.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.209 "vendor_id": "0x8086" 00:07:58.209 }, 00:07:58.209 "ns_data": { 00:07:58.209 "can_share": true, 00:07:58.209 "id": 1 00:07:58.209 }, 00:07:58.209 "trid": { 00:07:58.209 "adrfam": "IPv4", 00:07:58.209 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:58.209 "traddr": "10.0.0.2", 00:07:58.209 "trsvcid": "4420", 00:07:58.209 "trtype": "TCP" 00:07:58.209 }, 00:07:58.209 "vs": { 00:07:58.209 "nvme_version": "1.3" 00:07:58.209 } 00:07:58.209 } 00:07:58.209 ] 00:07:58.209 }, 00:07:58.209 "memory_domains": [ 00:07:58.209 { 00:07:58.209 "dma_device_id": "system", 00:07:58.209 "dma_device_type": 1 00:07:58.209 } 00:07:58.209 ], 00:07:58.210 "name": "Nvme0n1", 00:07:58.210 "num_blocks": 38912, 00:07:58.210 "numa_id": -1, 00:07:58.210 "product_name": "NVMe disk", 00:07:58.210 "supported_io_types": { 00:07:58.210 "abort": true, 00:07:58.210 "compare": true, 00:07:58.210 "compare_and_write": true, 00:07:58.210 "copy": true, 00:07:58.210 "flush": true, 00:07:58.210 "get_zone_info": false, 00:07:58.210 "nvme_admin": true, 00:07:58.210 "nvme_io": true, 00:07:58.210 "nvme_io_md": false, 00:07:58.210 "nvme_iov_md": false, 00:07:58.210 "read": true, 
00:07:58.210 "reset": true, 00:07:58.210 "seek_data": false, 00:07:58.210 "seek_hole": false, 00:07:58.210 "unmap": true, 00:07:58.210 "write": true, 00:07:58.210 "write_zeroes": true, 00:07:58.210 "zcopy": false, 00:07:58.210 "zone_append": false, 00:07:58.210 "zone_management": false 00:07:58.210 }, 00:07:58.210 "uuid": "f37f1d97-7312-48b5-8e74-a231f1cca06f", 00:07:58.210 "zoned": false 00:07:58.210 } 00:07:58.210 ] 00:07:58.210 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=67133 00:07:58.210 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:58.210 09:01:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:58.210 Running I/O for 10 seconds... 00:07:59.583 Latency(us) 00:07:59.583 [2024-11-20T09:01:38.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.583 Nvme0n1 : 1.00 7575.00 29.59 0.00 0.00 0.00 0.00 0.00 00:07:59.583 [2024-11-20T09:01:38.502Z] =================================================================================================================== 00:07:59.583 [2024-11-20T09:01:38.502Z] Total : 7575.00 29.59 0.00 0.00 0.00 0.00 0.00 00:07:59.583 00:08:00.149 09:01:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:00.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.408 Nvme0n1 : 2.00 7570.50 29.57 0.00 0.00 0.00 0.00 0.00 00:08:00.408 [2024-11-20T09:01:39.327Z] 
=================================================================================================================== 00:08:00.408 [2024-11-20T09:01:39.327Z] Total : 7570.50 29.57 0.00 0.00 0.00 0.00 0.00 00:08:00.408 00:08:00.408 true 00:08:00.408 09:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:00.408 09:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.975 09:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.975 09:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.975 09:01:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 67133 00:08:01.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.233 Nvme0n1 : 3.00 7465.33 29.16 0.00 0.00 0.00 0.00 0.00 00:08:01.233 [2024-11-20T09:01:40.152Z] =================================================================================================================== 00:08:01.233 [2024-11-20T09:01:40.152Z] Total : 7465.33 29.16 0.00 0.00 0.00 0.00 0.00 00:08:01.233 00:08:02.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.609 Nvme0n1 : 4.00 7408.25 28.94 0.00 0.00 0.00 0.00 0.00 00:08:02.609 [2024-11-20T09:01:41.528Z] =================================================================================================================== 00:08:02.609 [2024-11-20T09:01:41.528Z] Total : 7408.25 28.94 0.00 0.00 0.00 0.00 0.00 00:08:02.609 00:08:03.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.545 Nvme0n1 : 5.00 7264.20 28.38 0.00 0.00 0.00 0.00 0.00 00:08:03.545 
[2024-11-20T09:01:42.464Z] =================================================================================================================== 00:08:03.545 [2024-11-20T09:01:42.464Z] Total : 7264.20 28.38 0.00 0.00 0.00 0.00 0.00 00:08:03.545 00:08:04.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.480 Nvme0n1 : 6.00 7249.67 28.32 0.00 0.00 0.00 0.00 0.00 00:08:04.480 [2024-11-20T09:01:43.399Z] =================================================================================================================== 00:08:04.480 [2024-11-20T09:01:43.399Z] Total : 7249.67 28.32 0.00 0.00 0.00 0.00 0.00 00:08:04.480 00:08:05.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.414 Nvme0n1 : 7.00 7299.71 28.51 0.00 0.00 0.00 0.00 0.00 00:08:05.414 [2024-11-20T09:01:44.333Z] =================================================================================================================== 00:08:05.414 [2024-11-20T09:01:44.333Z] Total : 7299.71 28.51 0.00 0.00 0.00 0.00 0.00 00:08:05.414 00:08:06.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.349 Nvme0n1 : 8.00 7331.38 28.64 0.00 0.00 0.00 0.00 0.00 00:08:06.349 [2024-11-20T09:01:45.268Z] =================================================================================================================== 00:08:06.349 [2024-11-20T09:01:45.268Z] Total : 7331.38 28.64 0.00 0.00 0.00 0.00 0.00 00:08:06.349 00:08:07.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.284 Nvme0n1 : 9.00 7329.67 28.63 0.00 0.00 0.00 0.00 0.00 00:08:07.284 [2024-11-20T09:01:46.203Z] =================================================================================================================== 00:08:07.284 [2024-11-20T09:01:46.203Z] Total : 7329.67 28.63 0.00 0.00 0.00 0.00 0.00 00:08:07.284 00:08:08.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.340 
Nvme0n1 : 10.00 7308.20 28.55 0.00 0.00 0.00 0.00 0.00 00:08:08.340 [2024-11-20T09:01:47.259Z] =================================================================================================================== 00:08:08.340 [2024-11-20T09:01:47.259Z] Total : 7308.20 28.55 0.00 0.00 0.00 0.00 0.00 00:08:08.340 00:08:08.340 00:08:08.340 Latency(us) 00:08:08.341 [2024-11-20T09:01:47.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.341 Nvme0n1 : 10.02 7309.50 28.55 0.00 0.00 17495.96 4438.57 140127.88 00:08:08.341 [2024-11-20T09:01:47.260Z] =================================================================================================================== 00:08:08.341 [2024-11-20T09:01:47.260Z] Total : 7309.50 28.55 0.00 0.00 17495.96 4438.57 140127.88 00:08:08.341 { 00:08:08.341 "results": [ 00:08:08.341 { 00:08:08.341 "job": "Nvme0n1", 00:08:08.341 "core_mask": "0x2", 00:08:08.341 "workload": "randwrite", 00:08:08.341 "status": "finished", 00:08:08.341 "queue_depth": 128, 00:08:08.341 "io_size": 4096, 00:08:08.341 "runtime": 10.015732, 00:08:08.341 "iops": 7309.500693508971, 00:08:08.341 "mibps": 28.55273708401942, 00:08:08.341 "io_failed": 0, 00:08:08.341 "io_timeout": 0, 00:08:08.341 "avg_latency_us": 17495.95746870149, 00:08:08.341 "min_latency_us": 4438.574545454546, 00:08:08.341 "max_latency_us": 140127.88363636364 00:08:08.341 } 00:08:08.341 ], 00:08:08.341 "core_count": 1 00:08:08.341 } 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 67080 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 67080 ']' 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 67080 00:08:08.341 09:01:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67080 00:08:08.341 killing process with pid 67080 00:08:08.341 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.341 00:08:08.341 Latency(us) 00:08:08.341 [2024-11-20T09:01:47.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.341 [2024-11-20T09:01:47.260Z] =================================================================================================================== 00:08:08.341 [2024-11-20T09:01:47.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67080' 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 67080 00:08:08.341 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 67080 00:08:08.598 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.857 09:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:08:09.423 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.423 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66460 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66460 00:08:09.682 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66460 Killed "${NVMF_APP[@]}" "$@" 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=67301 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.682 09:01:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 67301 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 67301 ']' 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.682 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.682 [2024-11-20 09:01:48.522901] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:09.682 [2024-11-20 09:01:48.523052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.940 [2024-11-20 09:01:48.671088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.940 [2024-11-20 09:01:48.734715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.940 [2024-11-20 09:01:48.734789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:09.940 [2024-11-20 09:01:48.734802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.940 [2024-11-20 09:01:48.734811] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.940 [2024-11-20 09:01:48.734819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.940 [2024-11-20 09:01:48.735232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.199 09:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.456 [2024-11-20 09:01:49.236846] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:10.456 [2024-11-20 09:01:49.237307] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:10.456 [2024-11-20 09:01:49.237515] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f37f1d97-7312-48b5-8e74-a231f1cca06f 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f37f1d97-7312-48b5-8e74-a231f1cca06f 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.456 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.714 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f37f1d97-7312-48b5-8e74-a231f1cca06f -t 2000 00:08:11.280 [ 00:08:11.280 { 00:08:11.280 "aliases": [ 00:08:11.280 "lvs/lvol" 00:08:11.280 ], 00:08:11.280 "assigned_rate_limits": { 00:08:11.280 "r_mbytes_per_sec": 0, 00:08:11.280 "rw_ios_per_sec": 0, 00:08:11.280 "rw_mbytes_per_sec": 0, 00:08:11.280 "w_mbytes_per_sec": 0 00:08:11.280 }, 00:08:11.280 "block_size": 4096, 00:08:11.280 "claimed": false, 00:08:11.280 "driver_specific": { 00:08:11.280 "lvol": { 00:08:11.280 "base_bdev": "aio_bdev", 00:08:11.280 "clone": false, 00:08:11.280 "esnap_clone": false, 00:08:11.280 "lvol_store_uuid": "20d5bc8d-a6de-415a-b977-9f274b18d4c5", 00:08:11.280 "num_allocated_clusters": 38, 00:08:11.280 "snapshot": false, 00:08:11.280 "thin_provision": false 
00:08:11.280 } 00:08:11.280 }, 00:08:11.280 "name": "f37f1d97-7312-48b5-8e74-a231f1cca06f", 00:08:11.280 "num_blocks": 38912, 00:08:11.280 "product_name": "Logical Volume", 00:08:11.280 "supported_io_types": { 00:08:11.280 "abort": false, 00:08:11.280 "compare": false, 00:08:11.280 "compare_and_write": false, 00:08:11.280 "copy": false, 00:08:11.280 "flush": false, 00:08:11.280 "get_zone_info": false, 00:08:11.280 "nvme_admin": false, 00:08:11.280 "nvme_io": false, 00:08:11.280 "nvme_io_md": false, 00:08:11.280 "nvme_iov_md": false, 00:08:11.280 "read": true, 00:08:11.280 "reset": true, 00:08:11.280 "seek_data": true, 00:08:11.280 "seek_hole": true, 00:08:11.280 "unmap": true, 00:08:11.280 "write": true, 00:08:11.280 "write_zeroes": true, 00:08:11.280 "zcopy": false, 00:08:11.280 "zone_append": false, 00:08:11.280 "zone_management": false 00:08:11.280 }, 00:08:11.280 "uuid": "f37f1d97-7312-48b5-8e74-a231f1cca06f", 00:08:11.280 "zoned": false 00:08:11.280 } 00:08:11.280 ] 00:08:11.280 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:11.280 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:11.280 09:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:11.539 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:11.539 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:11.539 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:11.798 09:01:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:11.798 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.056 [2024-11-20 09:01:50.941531] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.315 09:01:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:12.315 09:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:12.573 2024/11/20 09:01:51 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:20d5bc8d-a6de-415a-b977-9f274b18d4c5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:12.573 request: 00:08:12.573 { 00:08:12.573 "method": "bdev_lvol_get_lvstores", 00:08:12.573 "params": { 00:08:12.573 "uuid": "20d5bc8d-a6de-415a-b977-9f274b18d4c5" 00:08:12.573 } 00:08:12.573 } 00:08:12.573 Got JSON-RPC error response 00:08:12.573 GoRPCClient: error on JSON-RPC call 00:08:12.573 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:12.573 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:12.573 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:12.573 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:12.573 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.831 aio_bdev 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f37f1d97-7312-48b5-8e74-a231f1cca06f 00:08:12.831 
09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f37f1d97-7312-48b5-8e74-a231f1cca06f 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.831 09:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:13.395 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f37f1d97-7312-48b5-8e74-a231f1cca06f -t 2000 00:08:13.395 [ 00:08:13.395 { 00:08:13.395 "aliases": [ 00:08:13.395 "lvs/lvol" 00:08:13.395 ], 00:08:13.395 "assigned_rate_limits": { 00:08:13.395 "r_mbytes_per_sec": 0, 00:08:13.395 "rw_ios_per_sec": 0, 00:08:13.395 "rw_mbytes_per_sec": 0, 00:08:13.395 "w_mbytes_per_sec": 0 00:08:13.395 }, 00:08:13.395 "block_size": 4096, 00:08:13.395 "claimed": false, 00:08:13.395 "driver_specific": { 00:08:13.395 "lvol": { 00:08:13.395 "base_bdev": "aio_bdev", 00:08:13.395 "clone": false, 00:08:13.395 "esnap_clone": false, 00:08:13.395 "lvol_store_uuid": "20d5bc8d-a6de-415a-b977-9f274b18d4c5", 00:08:13.395 "num_allocated_clusters": 38, 00:08:13.395 "snapshot": false, 00:08:13.395 "thin_provision": false 00:08:13.395 } 00:08:13.395 }, 00:08:13.395 "name": "f37f1d97-7312-48b5-8e74-a231f1cca06f", 00:08:13.395 "num_blocks": 38912, 00:08:13.395 "product_name": "Logical Volume", 00:08:13.395 "supported_io_types": { 00:08:13.395 
"abort": false, 00:08:13.395 "compare": false, 00:08:13.395 "compare_and_write": false, 00:08:13.395 "copy": false, 00:08:13.395 "flush": false, 00:08:13.395 "get_zone_info": false, 00:08:13.395 "nvme_admin": false, 00:08:13.395 "nvme_io": false, 00:08:13.395 "nvme_io_md": false, 00:08:13.395 "nvme_iov_md": false, 00:08:13.395 "read": true, 00:08:13.395 "reset": true, 00:08:13.395 "seek_data": true, 00:08:13.395 "seek_hole": true, 00:08:13.395 "unmap": true, 00:08:13.395 "write": true, 00:08:13.396 "write_zeroes": true, 00:08:13.396 "zcopy": false, 00:08:13.396 "zone_append": false, 00:08:13.396 "zone_management": false 00:08:13.396 }, 00:08:13.396 "uuid": "f37f1d97-7312-48b5-8e74-a231f1cca06f", 00:08:13.396 "zoned": false 00:08:13.396 } 00:08:13.396 ] 00:08:13.653 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:13.653 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:13.653 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:13.911 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:13.911 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:13.911 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:14.170 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:14.170 09:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f37f1d97-7312-48b5-8e74-a231f1cca06f 00:08:14.429 09:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 20d5bc8d-a6de-415a-b977-9f274b18d4c5 00:08:14.995 09:01:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:15.254 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:15.821 ************************************ 00:08:15.821 END TEST lvs_grow_dirty 00:08:15.821 ************************************ 00:08:15.821 00:08:15.821 real 0m22.539s 00:08:15.821 user 0m47.132s 00:08:15.821 sys 0m8.065s 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 
00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:15.821 nvmf_trace.0 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:15.821 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:16.095 rmmod nvme_tcp 00:08:16.095 rmmod nvme_fabrics 00:08:16.095 rmmod nvme_keyring 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 67301 ']' 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 67301 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 67301 ']' 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@958 -- # kill -0 67301 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67301 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.095 killing process with pid 67301 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67301' 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 67301 00:08:16.095 09:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 67301 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:08:16.384 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # 
delete_dev initiator1 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:08:16.384 00:08:16.384 real 0m44.528s 00:08:16.384 user 1m13.595s 00:08:16.384 sys 0m11.190s 00:08:16.384 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.384 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.384 ************************************ 00:08:16.384 END TEST nvmf_lvs_grow 00:08:16.384 ************************************ 00:08:16.385 09:01:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.385 09:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.385 09:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.385 09:01:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.385 ************************************ 00:08:16.385 START TEST nvmf_bdev_io_wait 00:08:16.385 ************************************ 00:08:16.385 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.644 * Looking for test storage... 
00:08:16.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:16.644 
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:16.644 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:08:16.644 --rc genhtml_branch_coverage=1 00:08:16.644 --rc genhtml_function_coverage=1 00:08:16.644 --rc genhtml_legend=1 00:08:16.644 --rc geninfo_all_blocks=1 00:08:16.644 --rc geninfo_unexecuted_blocks=1 00:08:16.644 00:08:16.644 ' 00:08:16.644 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:16.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.644 --rc genhtml_branch_coverage=1 00:08:16.644 --rc genhtml_function_coverage=1 00:08:16.644 --rc genhtml_legend=1 00:08:16.644 --rc geninfo_all_blocks=1 00:08:16.644 --rc geninfo_unexecuted_blocks=1 00:08:16.644 00:08:16.644 ' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:16.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.645 --rc genhtml_branch_coverage=1 00:08:16.645 --rc genhtml_function_coverage=1 00:08:16.645 --rc genhtml_legend=1 00:08:16.645 --rc geninfo_all_blocks=1 00:08:16.645 --rc geninfo_unexecuted_blocks=1 00:08:16.645 00:08:16.645 ' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:16.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.645 --rc genhtml_branch_coverage=1 00:08:16.645 --rc genhtml_function_coverage=1 00:08:16.645 --rc genhtml_legend=1 00:08:16.645 --rc geninfo_all_blocks=1 00:08:16.645 --rc geninfo_unexecuted_blocks=1 00:08:16.645 00:08:16.645 ' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.645 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 
00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:16.645 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:16.645 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@223 -- # create_target_ns 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # 
set_up lo NVMF_TARGET_NS_CMD 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:08:16.645 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:16.645 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth 
== phy ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:16.646 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:08:16.906 10.0.0.1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:08:16.906 
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:08:16.906 10.0.0.2 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 
type=veth ip=167772163 transport=tcp ips 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:08:16.906 10.0.0.3 00:08:16.906 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:08:16.906 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:08:16.907 10.0.0.4 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:16.907 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.907 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:16.907 
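Note how the script records each device's IP twice: once on the interface with `ip addr add`, and once as plain text in `/sys/class/net/<dev>/ifalias` via `tee`. The `get_ip_address` calls in the trace then read the alias file back rather than parsing `ip addr` output. A sketch of that lookup, using a temporary directory as a stand-in for sysfs so it runs without real interfaces (`SYSFS_ROOT` is an assumption of this sketch, not part of setup.sh):

```shell
#!/usr/bin/env bash
# Mimic the ifalias-based IP lookup from the trace. The real script
# reads /sys/class/net/<dev>/ifalias; here a temp dir plays that role.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/initiator0"
echo 10.0.0.1 > "$SYSFS_ROOT/initiator0/ifalias"

get_ip_address() {
	local dev=$1
	cat "$SYSFS_ROOT/$dev/ifalias"
}

get_ip_address initiator0  # prints 10.0.0.1
```

Storing the address in `ifalias` makes the later lookups a single `cat` (optionally behind `ip netns exec` for namespaced devices), avoiding any parsing of `ip` command output.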
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:16.907 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:17.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:08:17.167 00:08:17.167 --- 10.0.0.1 ping statistics --- 00:08:17.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.167 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:17.167 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:17.168 
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:08:17.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:08:17.168 00:08:17.168 --- 10.0.0.2 ping statistics --- 00:08:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.168 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:08:17.168 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:17.168 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:08:17.168 00:08:17.168 --- 10.0.0.3 ping statistics --- 00:08:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.168 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:17.168 
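The loop bookkeeping visible at `setup.sh@33` in the trace, `(( _dev++, ip_pool += 2 ))`, advances the pool by two addresses per initiator/target pair, which is why the two pairs end up on 10.0.0.1/10.0.0.2 and 10.0.0.3/10.0.0.4. A minimal sketch of that allocation (variable names follow the trace; the loop body is simplified):

```shell
#!/usr/bin/env bash
# Each iteration hands one address to the initiator and the next to the
# target, then steps the pool by two for the following pair.
ip_pool=167772161   # 10.0.0.1
_dev=0
max=2
while (( _dev < max )); do
	initiator_ip=$ip_pool
	target_ip=$(( ip_pool + 1 ))
	echo "pair$_dev: $initiator_ip $target_ip"
	(( _dev++, ip_pool += 2 ))
done
```

After two iterations the pool sits at `167772165` (10.0.0.5), ready for a hypothetical third pair.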
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:08:17.168 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:17.168 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.106 ms 00:08:17.168 00:08:17.168 --- 10.0.0.4 ping statistics --- 00:08:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.168 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2=target1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 
00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:17.168 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:17.169 
09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:17.169 09:01:55 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=67773 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 67773 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67773 ']' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.169 09:01:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.169 [2024-11-20 09:01:56.030105] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:17.169 [2024-11-20 09:01:56.030196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.428 [2024-11-20 09:01:56.175092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.428 [2024-11-20 09:01:56.241386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.428 [2024-11-20 09:01:56.241443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.428 [2024-11-20 09:01:56.241455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.428 [2024-11-20 09:01:56.241464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.428 [2024-11-20 09:01:56.241471] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
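`nvmfappstart` above launches `nvmf_tgt` with `--wait-for-rpc` inside the target netns, records `nvmfpid=67773`, and then blocks in `waitforlisten` until the application's RPC UNIX socket appears ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A rough sketch of that polling idea; the retry count and sleep interval are illustrative, and this is not the exact `autotest_common.sh` helper:

```shell
# Poll until a process has created its RPC UNIX socket, or give up.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
    while ((retries-- > 0)); do
        # Bail out early if the target died during startup
        kill -0 "$pid" 2>/dev/null || return 1
        [[ -S $sock ]] && return 0   # socket exists: RPC server is up
        sleep 0.1
    done
    return 1
}
```

Once the socket is up, the trace installs a trap (`process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini`) so the target and its shared memory are reaped on any exit path.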
00:08:17.428 [2024-11-20 09:01:56.242947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.428 [2024-11-20 09:01:56.243023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.428 [2024-11-20 09:01:56.243166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.428 [2024-11-20 09:01:56.243172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 [2024-11-20 09:01:57.191173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 Malloc0 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.364 09:01:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.364 [2024-11-20 09:01:57.242250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67826 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67828 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # 
config+=("$(cat <<-EOF 00:08:18.364 { 00:08:18.364 "params": { 00:08:18.364 "name": "Nvme$subsystem", 00:08:18.364 "trtype": "$TEST_TRANSPORT", 00:08:18.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.364 "adrfam": "ipv4", 00:08:18.364 "trsvcid": "$NVMF_PORT", 00:08:18.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.364 "hdgst": ${hdgst:-false}, 00:08:18.364 "ddgst": ${ddgst:-false} 00:08:18.364 }, 00:08:18.364 "method": "bdev_nvme_attach_controller" 00:08:18.364 } 00:08:18.364 EOF 00:08:18.364 )") 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67830 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.364 { 00:08:18.364 "params": { 00:08:18.364 "name": "Nvme$subsystem", 00:08:18.364 "trtype": "$TEST_TRANSPORT", 00:08:18.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.364 "adrfam": "ipv4", 00:08:18.364 "trsvcid": "$NVMF_PORT", 00:08:18.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.364 "hdgst": ${hdgst:-false}, 00:08:18.364 "ddgst": ${ddgst:-false} 00:08:18.364 }, 00:08:18.364 "method": "bdev_nvme_attach_controller" 
00:08:18.364 } 00:08:18.364 EOF 00:08:18.364 )") 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.364 { 00:08:18.364 "params": { 00:08:18.364 "name": "Nvme$subsystem", 00:08:18.364 "trtype": "$TEST_TRANSPORT", 00:08:18.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.364 "adrfam": "ipv4", 00:08:18.364 "trsvcid": "$NVMF_PORT", 00:08:18.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.364 "hdgst": ${hdgst:-false}, 00:08:18.364 "ddgst": ${ddgst:-false} 00:08:18.364 }, 00:08:18.364 "method": "bdev_nvme_attach_controller" 00:08:18.364 } 00:08:18.364 EOF 00:08:18.364 )") 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.364 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67838 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@396 -- # jq . 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.365 "params": { 00:08:18.365 "name": "Nvme1", 00:08:18.365 "trtype": "tcp", 00:08:18.365 "traddr": "10.0.0.2", 00:08:18.365 "adrfam": "ipv4", 00:08:18.365 "trsvcid": "4420", 00:08:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.365 "hdgst": false, 00:08:18.365 "ddgst": false 00:08:18.365 }, 00:08:18.365 "method": "bdev_nvme_attach_controller" 00:08:18.365 }' 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.365 { 00:08:18.365 "params": { 00:08:18.365 "name": "Nvme$subsystem", 00:08:18.365 "trtype": "$TEST_TRANSPORT", 00:08:18.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.365 "adrfam": "ipv4", 00:08:18.365 "trsvcid": "$NVMF_PORT", 00:08:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.365 "hdgst": ${hdgst:-false}, 00:08:18.365 "ddgst": ${ddgst:-false} 00:08:18.365 }, 00:08:18.365 "method": "bdev_nvme_attach_controller" 00:08:18.365 } 
00:08:18.365 EOF 00:08:18.365 )") 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.365 "params": { 00:08:18.365 "name": "Nvme1", 00:08:18.365 "trtype": "tcp", 00:08:18.365 "traddr": "10.0.0.2", 00:08:18.365 "adrfam": "ipv4", 00:08:18.365 "trsvcid": "4420", 00:08:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.365 "hdgst": false, 00:08:18.365 "ddgst": false 00:08:18.365 }, 00:08:18.365 "method": "bdev_nvme_attach_controller" 00:08:18.365 }' 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
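The repeated heredoc / `cat` / `jq .` / `printf` sequence above is `gen_nvmf_target_json` building each bdevperf `--json` config on the fly: a `bdev_nvme_attach_controller` stanza is filled from environment variables with `false` fallbacks for the digest flags, then normalized through `jq`. A condensed single-stanza sketch of the same pattern; the function name and the default values are illustrative stand-ins for the real helper and its environment:

```shell
# Emit one bdev_nvme_attach_controller stanza the way the traced
# gen_nvmf_target_json does: heredoc with env-derived defaults,
# pretty-printed via jq when available (plain cat otherwise).
gen_attach_stanza() {
    local subsystem=${1:-1}
    cat <<EOF | if command -v jq >/dev/null; then jq .; else cat; fi
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

In the trace this expands to the literal `{"params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", ...}}` blocks printed four times, once per bdevperf instance, each fed to the process over `/dev/fd/63`.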
00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.365 "params": { 00:08:18.365 "name": "Nvme1", 00:08:18.365 "trtype": "tcp", 00:08:18.365 "traddr": "10.0.0.2", 00:08:18.365 "adrfam": "ipv4", 00:08:18.365 "trsvcid": "4420", 00:08:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.365 "hdgst": false, 00:08:18.365 "ddgst": false 00:08:18.365 }, 00:08:18.365 "method": "bdev_nvme_attach_controller" 00:08:18.365 }' 00:08:18.365 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.365 "params": { 00:08:18.365 "name": "Nvme1", 00:08:18.365 "trtype": "tcp", 00:08:18.365 "traddr": "10.0.0.2", 00:08:18.365 "adrfam": "ipv4", 00:08:18.365 "trsvcid": "4420", 00:08:18.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.365 "hdgst": false, 00:08:18.365 "ddgst": false 00:08:18.365 }, 00:08:18.365 "method": "bdev_nvme_attach_controller" 00:08:18.365 }' 00:08:18.623 [2024-11-20 09:01:57.317781] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:18.623 [2024-11-20 09:01:57.317873] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:18.623 [2024-11-20 09:01:57.320516] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:08:18.624 [2024-11-20 09:01:57.320620] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:18.624 09:01:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67826 00:08:18.624 [2024-11-20 09:01:57.327633] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:18.624 [2024-11-20 09:01:57.328158] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:18.624 [2024-11-20 09:01:57.359520] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:18.624 [2024-11-20 09:01:57.359647] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:18.624 [2024-11-20 09:01:57.528559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.882 [2024-11-20 09:01:57.580820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.882 [2024-11-20 09:01:57.610674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.882 [2024-11-20 09:01:57.671145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.882 [2024-11-20 09:01:57.689068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.882 [2024-11-20 09:01:57.747246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:18.882 Running I/O for 1 seconds... 
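The four bdevperf instances above run the write, read, flush, and unmap workloads concurrently, each pinned to its own core mask (`0x10`/`0x20`/`0x40`/`0x80`) with identical queue settings, and the script later `wait`s on `WRITE_PID`, `READ_PID`, `FLUSH_PID`, and `UNMAP_PID` in turn. The orchestration reduces to the sketch below; the runner function is illustrative (the real script inlines this), while the option values are copied from the trace:

```shell
# Launch one bdevperf workload per core mask in the background and
# collect the PIDs, mirroring WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID.
run_bdevperf_workloads() {
    local bdevperf=$1 json=$2 i=1 spec mask workload
    local -A pids=()
    for spec in "0x10:write" "0x20:read" "0x40:flush" "0x80:unmap"; do
        mask=${spec%%:*} workload=${spec##*:}
        # -q 128 -o 4096 -t 1 -s 256 match the traced invocations
        "$bdevperf" -m "$mask" -i "$i" --json "$json" \
            -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
        pids[$workload]=$!
        ((i++))
    done
    for workload in "${!pids[@]}"; do
        wait "${pids[$workload]}" || return 1   # each 1-second I/O pass
    done
}
```

Waiting per PID is what produces the interleaved "Running I/O for 1 seconds..." lines and, afterwards, one latency table per workload in the order the jobs finish.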
00:08:18.882 [2024-11-20 09:01:57.766249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.140 Running I/O for 1 seconds... 00:08:19.140 [2024-11-20 09:01:57.828263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:19.140 Running I/O for 1 seconds... 00:08:19.140 Running I/O for 1 seconds... 00:08:20.074 9181.00 IOPS, 35.86 MiB/s 00:08:20.074 Latency(us) 00:08:20.074 [2024-11-20T09:01:58.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.074 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:20.074 Nvme1n1 : 1.01 9221.31 36.02 0.00 0.00 13808.43 6583.39 17158.52 00:08:20.074 [2024-11-20T09:01:58.993Z] =================================================================================================================== 00:08:20.074 [2024-11-20T09:01:58.993Z] Total : 9221.31 36.02 0.00 0.00 13808.43 6583.39 17158.52 00:08:20.074 7956.00 IOPS, 31.08 MiB/s 00:08:20.074 Latency(us) 00:08:20.074 [2024-11-20T09:01:58.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.074 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:20.074 Nvme1n1 : 1.01 8014.77 31.31 0.00 0.00 15884.77 4796.04 22997.18 00:08:20.074 [2024-11-20T09:01:58.993Z] =================================================================================================================== 00:08:20.074 [2024-11-20T09:01:58.993Z] Total : 8014.77 31.31 0.00 0.00 15884.77 4796.04 22997.18 00:08:20.074 8931.00 IOPS, 34.89 MiB/s 00:08:20.074 Latency(us) 00:08:20.074 [2024-11-20T09:01:58.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.074 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:20.074 Nvme1n1 : 1.01 9008.04 35.19 0.00 0.00 14158.71 2770.39 20852.36 00:08:20.074 [2024-11-20T09:01:58.993Z] 
=================================================================================================================== 00:08:20.074 [2024-11-20T09:01:58.993Z] Total : 9008.04 35.19 0.00 0.00 14158.71 2770.39 20852.36 00:08:20.074 09:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67828 00:08:20.074 09:01:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67830 00:08:20.074 192544.00 IOPS, 752.12 MiB/s 00:08:20.074 Latency(us) 00:08:20.074 [2024-11-20T09:01:58.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.074 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:20.075 Nvme1n1 : 1.00 192147.15 750.57 0.00 0.00 662.37 309.06 2427.81 00:08:20.075 [2024-11-20T09:01:58.994Z] =================================================================================================================== 00:08:20.075 [2024-11-20T09:01:58.994Z] Total : 192147.15 750.57 0.00 0.00 662.37 309.06 2427.81 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67838 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 
00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:20.333 rmmod nvme_tcp 00:08:20.333 rmmod nvme_fabrics 00:08:20.333 rmmod nvme_keyring 00:08:20.333 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 67773 ']' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 67773 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67773 ']' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67773 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67773 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.590 killing process with pid 67773 00:08:20.590 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67773' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67773 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67773 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 
00:08:20.590 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:20.848 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 
00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:08:20.849 00:08:20.849 real 0m4.340s 00:08:20.849 user 0m17.640s 00:08:20.849 sys 0m2.275s 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.849 ************************************ 00:08:20.849 END TEST nvmf_bdev_io_wait 00:08:20.849 ************************************ 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # 
run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.849 ************************************ 00:08:20.849 START TEST nvmf_queue_depth 00:08:20.849 ************************************ 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:20.849 * Looking for test storage... 00:08:20.849 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.849 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.108 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.109 
09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:21.109 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.109 --rc genhtml_branch_coverage=1 00:08:21.109 --rc genhtml_function_coverage=1 00:08:21.109 --rc genhtml_legend=1 00:08:21.109 --rc geninfo_all_blocks=1 00:08:21.109 --rc geninfo_unexecuted_blocks=1 00:08:21.109 00:08:21.109 ' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.109 --rc genhtml_branch_coverage=1 00:08:21.109 --rc genhtml_function_coverage=1 00:08:21.109 --rc genhtml_legend=1 00:08:21.109 --rc geninfo_all_blocks=1 00:08:21.109 --rc geninfo_unexecuted_blocks=1 00:08:21.109 00:08:21.109 ' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.109 --rc genhtml_branch_coverage=1 00:08:21.109 --rc genhtml_function_coverage=1 00:08:21.109 --rc genhtml_legend=1 00:08:21.109 --rc geninfo_all_blocks=1 00:08:21.109 --rc geninfo_unexecuted_blocks=1 00:08:21.109 00:08:21.109 ' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:08:21.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.109 --rc genhtml_branch_coverage=1 00:08:21.109 --rc genhtml_function_coverage=1 00:08:21.109 --rc genhtml_legend=1 00:08:21.109 --rc geninfo_all_blocks=1 00:08:21.109 --rc geninfo_unexecuted_blocks=1 00:08:21.109 00:08:21.109 ' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.109 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:21.109 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:21.109 
09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.109 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:08:21.110 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@280 -- # nvmf_veth_init 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/setup.sh@224 -- # create_main_bridge 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( 
ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:21.110 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up 00:08:21.110 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:08:21.110 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:08:21.110 10.0.0.1 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:08:21.110 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:08:21.110 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:08:21.110 10.0.0.2 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator0 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:08:21.111 09:01:59 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:08:21.111 09:01:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j 
ACCEPT 00:08:21.111 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:08:21.370 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:08:21.370 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:08:21.370 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:21.370 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:21.370 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # 
local dev=initiator1 peer=initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1 00:08:21.371 09:02:00 
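[Note: throughout the trace, helpers like `set_ip` and `set_up` take an optional `in_ns` argument naming a command-prefix array (here `NVMF_TARGET_NS_CMD`, which expands to `ip netns exec nvmf_ns_spdk`), bind it with `local -n`, and run the command through `eval` so the same helper works both on the host and inside the namespace. A simplified sketch of that indirection; the helper name and signature here are illustrative, not setup.sh's own:]

```shell
# Run a command directly, or through a command-prefix array named by
# the second argument (the pattern behind the eval lines in the trace).
run_maybe_in_ns() {
  local cmd=$1 in_ns=$2
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns        # nameref to e.g. NVMF_TARGET_NS_CMD
    eval "${ns[*]} $cmd"
  else
    eval "$cmd"
  fi
}

# In the real run: NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
NVMF_TARGET_NS_CMD=(env)      # harmless stand-in prefix for illustration
run_maybe_in_ns 'echo hello' NVMF_TARGET_NS_CMD
run_maybe_in_ns 'echo hello' ''
```

This explains the paired `eval '...'` / bare-command lines in the xtrace: the first is the helper logging the string it will evaluate, the second is the expansion actually executing.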
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:08:21.371 09:02:00 
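[Note: each `setup_interface_pair` iteration traced above performs the same sequence of `ip` commands. Condensed into one place for readability (pair 1 shown; requires root, so this is a non-runnable configuration sketch of what the trace executes, not an addition to it):]

```
# One initiator/target pair, as driven by setup_interface_pair:
ip link add initiator1 type veth peer name initiator1_br   # create_veth
ip link add target1 type veth peer name target1_br         # create_veth
ip link set target1 netns nvmf_ns_spdk                     # add_to_ns
ip addr add 10.0.0.3/24 dev initiator1                     # set_ip (host side)
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br                   # add_to_bridge
ip link set target1_br master nvmf_br
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:...'                     # ipts wrapper
```

The `_br` peers joined to `nvmf_br` are what let traffic flow between the host-side initiator devices and the namespaced target devices, and the `ipts` wrapper tags each iptables rule with an `SPDK_NVMF:` comment so teardown can find and remove exactly the rules it added.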
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:08:21.371 10.0.0.3 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:08:21.371 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:08:21.371 10.0.0.4 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:08:21.371 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n 
'' ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:21.371 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local 
dev=initiator0 in_ns= ip 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk 
ping -c 1 10.0.0.1' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:21.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:21.372 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:08:21.372 00:08:21.372 --- 10.0.0.1 ping statistics --- 00:08:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.372 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:08:21.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:21.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:08:21.372 00:08:21.372 --- 10.0.0.2 ping statistics --- 00:08:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.372 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:21.372 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:08:21.372 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:08:21.372 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:08:21.372 00:08:21.372 --- 10.0.0.3 ping statistics --- 00:08:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.372 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:21.372 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:08:21.372 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:21.372 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:08:21.372 00:08:21.372 --- 10.0.0.4 ping statistics --- 00:08:21.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.372 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 
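[Note: the `get_ip_address` calls traced above do not parse `ip addr` output; `set_ip` writes each assigned address into `/sys/class/net/<dev>/ifalias`, and later lookups simply `cat` it back (inside the namespace when needed). A minimal sketch of that lookup; the overridable sysfs-root parameter is an addition here so the idea can be exercised against a temporary directory rather than real interfaces:]

```shell
# Read back the address stashed in an interface's ifalias, as
# get_ip_address does in the trace above. The second argument
# (sysfs root) is illustrative, for testing without real devices.
get_ip_from_alias() {
  local dev=$1 sysfs=${2:-/sys/class/net}
  cat "$sysfs/$dev/ifalias"
}
```

Using ifalias as a tiny key-value store keeps the lookup identical for host and namespaced devices: only the command prefix (`ip netns exec nvmf_ns_spdk`) changes, as seen in the `eval 'ip netns exec ... cat ...'` lines.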
00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:21.372 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:08:21.373 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:21.373 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:21.373 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address 
target1 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:21.631 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=68114 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 68114 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68114 ']' 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.632 
09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.632 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.632 [2024-11-20 09:02:00.400170] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:21.632 [2024-11-20 09:02:00.400307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.890 [2024-11-20 09:02:00.561851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.890 [2024-11-20 09:02:00.632024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.890 [2024-11-20 09:02:00.632093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.890 [2024-11-20 09:02:00.632107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.890 [2024-11-20 09:02:00.632117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.890 [2024-11-20 09:02:00.632126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.890 [2024-11-20 09:02:00.632612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.890 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:21.890 [2024-11-20 09:02:00.805749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.148 Malloc0 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.148 09:02:00 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.148 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.149 [2024-11-20 09:02:00.861148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=68145 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 
10 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 68145 /var/tmp/bdevperf.sock 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 68145 ']' 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:22.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.149 09:02:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.149 [2024-11-20 09:02:00.919058] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:08:22.149 [2024-11-20 09:02:00.919153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68145 ] 00:08:22.149 [2024-11-20 09:02:01.060474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.407 [2024-11-20 09:02:01.124229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:22.407 NVMe0n1 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.407 09:02:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:22.665 Running I/O for 10 seconds... 
00:08:24.563 7808.00 IOPS, 30.50 MiB/s [2024-11-20T09:02:04.857Z] 7903.00 IOPS, 30.87 MiB/s [2024-11-20T09:02:05.793Z] 8080.00 IOPS, 31.56 MiB/s [2024-11-20T09:02:06.727Z] 8024.25 IOPS, 31.34 MiB/s [2024-11-20T09:02:07.734Z] 7774.60 IOPS, 30.37 MiB/s [2024-11-20T09:02:08.670Z] 7794.50 IOPS, 30.45 MiB/s [2024-11-20T09:02:09.606Z] 7893.86 IOPS, 30.84 MiB/s [2024-11-20T09:02:10.544Z] 7932.00 IOPS, 30.98 MiB/s [2024-11-20T09:02:11.483Z] 8034.22 IOPS, 31.38 MiB/s [2024-11-20T09:02:11.742Z] 8069.90 IOPS, 31.52 MiB/s 00:08:32.823 Latency(us) 00:08:32.823 [2024-11-20T09:02:11.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.823 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:32.823 Verification LBA range: start 0x0 length 0x4000 00:08:32.823 NVMe0n1 : 10.09 8101.35 31.65 0.00 0.00 125872.06 28120.90 284068.77 00:08:32.823 [2024-11-20T09:02:11.742Z] =================================================================================================================== 00:08:32.823 [2024-11-20T09:02:11.742Z] Total : 8101.35 31.65 0.00 0.00 125872.06 28120.90 284068.77 00:08:32.823 { 00:08:32.823 "results": [ 00:08:32.823 { 00:08:32.823 "job": "NVMe0n1", 00:08:32.823 "core_mask": "0x1", 00:08:32.823 "workload": "verify", 00:08:32.823 "status": "finished", 00:08:32.823 "verify_range": { 00:08:32.823 "start": 0, 00:08:32.823 "length": 16384 00:08:32.823 }, 00:08:32.823 "queue_depth": 1024, 00:08:32.823 "io_size": 4096, 00:08:32.823 "runtime": 10.086956, 00:08:32.823 "iops": 8101.353867311407, 00:08:32.823 "mibps": 31.645913544185184, 00:08:32.823 "io_failed": 0, 00:08:32.823 "io_timeout": 0, 00:08:32.823 "avg_latency_us": 125872.0629805829, 00:08:32.823 "min_latency_us": 28120.901818181817, 00:08:32.823 "max_latency_us": 284068.77090909093 00:08:32.823 } 00:08:32.823 ], 00:08:32.823 "core_count": 1 00:08:32.823 } 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 68145 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68145 ']' 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68145 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68145 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.823 killing process with pid 68145 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68145' 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68145 00:08:32.823 Received shutdown signal, test time was about 10.000000 seconds 00:08:32.823 00:08:32.823 Latency(us) 00:08:32.823 [2024-11-20T09:02:11.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:32.823 [2024-11-20T09:02:11.742Z] =================================================================================================================== 00:08:32.823 [2024-11-20T09:02:11.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:32.823 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68145 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:33.082 rmmod nvme_tcp 00:08:33.082 rmmod nvme_fabrics 00:08:33.082 rmmod nvme_keyring 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 68114 ']' 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 68114 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 68114 ']' 00:08:33.082 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 68114 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68114 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 
00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:33.083 killing process with pid 68114 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68114' 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 68114 00:08:33.083 09:02:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 68114 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:08:33.341 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@117 -- # ip link delete initiator1 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:08:33.600 00:08:33.600 real 0m12.628s 00:08:33.600 user 0m21.364s 00:08:33.600 sys 0m2.128s 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 ************************************ 00:08:33.600 END TEST nvmf_queue_depth 00:08:33.600 ************************************ 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.600 ************************************ 00:08:33.600 START TEST nvmf_nmic 00:08:33.600 ************************************ 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:33.600 * Looking for test storage... 00:08:33.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.600 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # 
read -ra ver2 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.860 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.861 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.861 --rc genhtml_branch_coverage=1 00:08:33.861 --rc genhtml_function_coverage=1 00:08:33.861 --rc genhtml_legend=1 00:08:33.861 --rc geninfo_all_blocks=1 00:08:33.861 --rc geninfo_unexecuted_blocks=1 00:08:33.861 00:08:33.861 ' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.861 --rc genhtml_branch_coverage=1 00:08:33.861 --rc genhtml_function_coverage=1 00:08:33.861 --rc genhtml_legend=1 00:08:33.861 --rc geninfo_all_blocks=1 00:08:33.861 --rc geninfo_unexecuted_blocks=1 00:08:33.861 00:08:33.861 ' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.861 --rc genhtml_branch_coverage=1 00:08:33.861 --rc genhtml_function_coverage=1 00:08:33.861 --rc genhtml_legend=1 00:08:33.861 --rc geninfo_all_blocks=1 00:08:33.861 --rc geninfo_unexecuted_blocks=1 00:08:33.861 00:08:33.861 ' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.861 --rc genhtml_branch_coverage=1 00:08:33.861 --rc genhtml_function_coverage=1 00:08:33.861 --rc genhtml_legend=1 00:08:33.861 --rc geninfo_all_blocks=1 00:08:33.861 --rc 
geninfo_unexecuted_blocks=1 00:08:33.861 00:08:33.861 ' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- 
# source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:08:33.861 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:33.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:08:33.861 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- 
# local -n ns=NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@105 -- # delete_main_bridge 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:08:33.862 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up 
initiator0 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 
00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:08:33.862 
09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:08:33.862 10.0.0.1 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:08:33.862 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 
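The trace above shows the harness's `val_to_ip` helper (nvmf/setup.sh@11-13) turning a 32-bit integer from the address pool into a dotted quad via `printf '%u.%u.%u.%u\n'` — e.g. 167772161 (0x0a000001) becomes 10.0.0.1 for initiator0 and 167772162 becomes 10.0.0.2 for target0. A minimal Python re-implementation of that conversion (illustrative only; the real helper is the bash function shown in the log):

```python
def val_to_ip(val: int) -> str:
    """Mirror of the bash val_to_ip helper: split a 32-bit
    integer into four octets, most significant byte first."""
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# 167772161 == 0x0a000001 -> initiator0's address in the trace
print(val_to_ip(167772161))  # -> 10.0.0.1
print(val_to_ip(167772162))  # -> 10.0.0.2 (target0)
```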
00:08:33.863 10.0.0.2 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:08:33.863 
09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=target0 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:08:33.863 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:08:34.122 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:08:34.122 10.0.0.3 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:08:34.122 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:08:34.123 10.0.0.4 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:08:34.123 
09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:08:34.123 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + 
no )) 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:34.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:34.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:08:34.123 00:08:34.123 --- 10.0.0.1 ping statistics --- 00:08:34.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.123 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
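The trace above keeps resolving each device's address the same way: read `/sys/class/net/<dev>/ifalias` (wrapped in `ip netns exec nvmf_ns_spdk` when a namespace array such as `NVMF_TARGET_NS_CMD` is bound via `local -n`), then ping the result once. A minimal sketch of that `get_ip_address`/`ping_ip` pattern follows; the `SYSFS_ROOT` override is our own addition for illustration and is not part of setup.sh:

```shell
# Simplified sketch of setup.sh's get_ip_address/ping_ip helpers.
# in_ns carries an optional command prefix such as
# "ip netns exec nvmf_ns_spdk"; SYSFS_ROOT is a hypothetical knob
# added here so the lookup can be exercised without real interfaces.

get_ip_address() {
    local dev=$1 in_ns=${2:-} ip
    # setup.sh stores each interface's IP in its ifalias attribute
    ip=$(eval "$in_ns cat ${SYSFS_ROOT:-}/sys/class/net/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

ping_ip() {
    local ip=$1 in_ns=${2:-}
    # one probe is enough to verify the veth/bridge plumbing
    eval "$in_ns ping -c 1 $ip"
}
```

In the log this pair runs for every initiator/target pair (10.0.0.1 through 10.0.0.4), alternating between the host side and the `nvmf_ns_spdk` namespace.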
00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:08:34.123 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:08:34.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:34.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:08:34.123 00:08:34.124 --- 10.0.0.2 ping statistics --- 00:08:34.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.124 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:34.124 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:08:34.124 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:34.124 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:08:34.124 00:08:34.124 --- 10.0.0.3 ping statistics --- 00:08:34.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.124 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:08:34.124 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:08:34.124 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:08:34.124 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.113 ms 00:08:34.124 00:08:34.124 --- 10.0.0.4 ping statistics --- 00:08:34.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.124 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:34.124 09:02:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:34.124 09:02:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:34.124 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=68519 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@329 -- # waitforlisten 68519 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68519 ']' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.383 09:02:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:34.383 [2024-11-20 09:02:13.158879] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:34.383 [2024-11-20 09:02:13.159009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.642 [2024-11-20 09:02:13.326994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.642 [2024-11-20 09:02:13.399960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.642 [2024-11-20 09:02:13.400025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:34.642 [2024-11-20 09:02:13.400040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.642 [2024-11-20 09:02:13.400051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.642 [2024-11-20 09:02:13.400060] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.642 [2024-11-20 09:02:13.401335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.642 [2024-11-20 09:02:13.401427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.642 [2024-11-20 09:02:13.401532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.642 [2024-11-20 09:02:13.401534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.576 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.576 [2024-11-20 09:02:14.325253] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.577 
09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 Malloc0 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 [2024-11-20 09:02:14.389500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 test case1: single bdev can't be used in multiple subsystems 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 [2024-11-20 09:02:14.417298] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:35.577 [2024-11-20 
09:02:14.417340] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:35.577 [2024-11-20 09:02:14.417352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.577 2024/11/20 09:02:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:08:35.577 request: 00:08:35.577 { 00:08:35.577 "method": "nvmf_subsystem_add_ns", 00:08:35.577 "params": { 00:08:35.577 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:35.577 "namespace": { 00:08:35.577 "bdev_name": "Malloc0", 00:08:35.577 "no_auto_visible": false 00:08:35.577 } 00:08:35.577 } 00:08:35.577 } 00:08:35.577 Got JSON-RPC error response 00:08:35.577 GoRPCClient: error on JSON-RPC call 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:35.577 Adding namespace failed - expected result. 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
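Test case1 above depends on the RPC failing: `nvmf_subsystem_add_ns` must be rejected because Malloc0 is already claimed by cnode1, so the script captures the exit status in `nmic_status` and asserts it is non-zero. A hedged sketch of that invert-the-status pattern (`expect_failure` is our name for it, not a helper that exists in nmic.sh):

```shell
# Run a command that is *supposed* to fail and invert its status,
# mirroring the nmic_status bookkeeping around test case1 above.
expect_failure() {
    local status=0
    "$@" || status=$?
    if [ "$status" -eq 0 ]; then
        echo "command succeeded - failure expected" >&2
        return 1
    fi
    echo "command failed - expected result"
}
```

In the log this logic guards `rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0`, which the target rejects with `bdev Malloc0 already claimed: type exclusive_write`.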
00:08:35.577 test case2: host connect to nvmf target in multiple paths 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:35.577 [2024-11-20 09:02:14.429464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.577 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.835 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:36.094 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:36.094 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:36.094 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:36.094 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:36.094 09:02:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
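After connecting to both listeners (ports 4420 and 4421), the test waits for the kernel initiator to surface a block device by polling `lsblk` for the subsystem serial `SPDKISFASTANDAWESOME`. Roughly, assuming the loop bounds match the `(( i++ <= 15 ))` / `sleep 2` trace visible above:

```shell
# Poll for NVMe block devices whose SERIAL column matches the
# subsystem's serial, as done after the two "nvme connect" calls.
waitforserial() {
    local serial=$1 want=${2:-1} i=0 found
    while (( i++ <= 15 )); do
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
        (( found >= want )) && return 0
        sleep 2
    done
    return 1
}
```

The trace then proceeds once `nvme_devices == nvme_device_counter` (one device here), and fio is pointed at the resulting `/dev/nvme0n1`.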
00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:37.996 09:02:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:37.996 [global] 00:08:37.996 thread=1 00:08:37.996 invalidate=1 00:08:37.996 rw=write 00:08:37.996 time_based=1 00:08:37.996 runtime=1 00:08:37.996 ioengine=libaio 00:08:37.996 direct=1 00:08:37.996 bs=4096 00:08:37.996 iodepth=1 00:08:37.996 norandommap=0 00:08:37.996 numjobs=1 00:08:37.996 00:08:37.996 verify_dump=1 00:08:37.996 verify_backlog=512 00:08:37.996 verify_state_save=0 00:08:37.996 do_verify=1 00:08:37.996 verify=crc32c-intel 00:08:37.996 [job0] 00:08:37.996 filename=/dev/nvme0n1 00:08:37.996 Could not set queue depth (nvme0n1) 00:08:38.254 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:38.255 fio-3.35 00:08:38.255 Starting 1 thread 00:08:39.630 00:08:39.630 job0: (groupid=0, jobs=1): err= 0: pid=68629: Wed Nov 20 09:02:18 2024 00:08:39.630 read: IOPS=3218, BW=12.6MiB/s (13.2MB/s)(12.6MiB/1001msec) 00:08:39.630 slat (nsec): min=12801, max=60033, avg=16854.39, stdev=5156.84 00:08:39.630 clat (usec): min=126, max=360, avg=145.53, stdev=10.19 00:08:39.630 lat (usec): min=140, max=376, avg=162.38, stdev=12.25 
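The trace above shows the `waitforserial` helper polling `lsblk -l -o NAME,SERIAL` until the expected number of NVMe block devices carrying the serial `SPDKISFASTANDAWESOME` shows up. A minimal sketch of that retry loop, with the device listing stubbed out so it runs anywhere (`list_devices` is a hypothetical stand-in for the real `lsblk` call, and the sleep interval is simplified):

```shell
# Stand-in for `lsblk -l -o NAME,SERIAL`; pretends one device with the
# SPDK test serial is already attached.
list_devices() {
    echo "nvme0n1 SPDKISFASTANDAWESOME"
}

# Sketch of the waitforserial retry loop: poll up to 16 times for the
# expected count of devices with the given serial.
waitforserial() {
    serial=$1
    expected=${2:-1}
    i=0
    while [ "$i" -le 15 ]; do
        # grep -c exits non-zero on no match; keep the count and carry on
        found=$(list_devices | grep -c "$serial" || true)
        if [ "$found" -eq "$expected" ]; then
            return 0
        fi
        i=$((i + 1))
        sleep 1   # the real helper sleeps between polls as well
    done
    return 1      # device never appeared
}

waitforserial SPDKISFASTANDAWESOME 1 && echo "serial present"
```

In the log, the loop succeeds on the first check after the initial sleep (`nvme_devices=1` matches `nvme_device_counter=1`), so the script proceeds straight to the fio workload.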
00:08:39.630 clat percentiles (usec): 00:08:39.630 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:08:39.630 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:08:39.630 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 157], 95.00th=[ 161], 00:08:39.630 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 194], 99.95th=[ 302], 00:08:39.630 | 99.99th=[ 359] 00:08:39.630 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:39.630 slat (usec): min=18, max=141, avg=23.11, stdev= 6.88 00:08:39.630 clat (usec): min=46, max=260, avg=106.45, stdev= 8.48 00:08:39.630 lat (usec): min=110, max=402, avg=129.55, stdev=12.02 00:08:39.630 clat percentiles (usec): 00:08:39.630 | 1.00th=[ 95], 5.00th=[ 97], 10.00th=[ 98], 20.00th=[ 100], 00:08:39.630 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 108], 00:08:39.630 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:08:39.630 | 99.00th=[ 133], 99.50th=[ 139], 99.90th=[ 147], 99.95th=[ 174], 00:08:39.630 | 99.99th=[ 262] 00:08:39.630 bw ( KiB/s): min=14400, max=14400, per=100.00%, avg=14400.00, stdev= 0.00, samples=1 00:08:39.630 iops : min= 3600, max= 3600, avg=3600.00, stdev= 0.00, samples=1 00:08:39.630 lat (usec) : 50=0.01%, 100=10.56%, 250=89.38%, 500=0.04% 00:08:39.630 cpu : usr=1.90%, sys=11.30%, ctx=6807, majf=0, minf=5 00:08:39.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:39.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:39.630 issued rwts: total=3222,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:39.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:39.630 00:08:39.630 Run status group 0 (all jobs): 00:08:39.630 READ: bw=12.6MiB/s (13.2MB/s), 12.6MiB/s-12.6MiB/s (13.2MB/s-13.2MB/s), io=12.6MiB (13.2MB), run=1001-1001msec 00:08:39.630 WRITE: bw=14.0MiB/s (14.7MB/s), 
14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:08:39.630 00:08:39.630 Disk stats (read/write): 00:08:39.630 nvme0n1: ios=3039/3072, merge=0/0, ticks=487/358, in_queue=845, util=91.38% 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:39.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:39.630 09:02:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:39.630 rmmod nvme_tcp 00:08:39.630 rmmod nvme_fabrics 00:08:39.630 rmmod nvme_keyring 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 68519 ']' 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 68519 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68519 ']' 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68519 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68519 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.630 killing process with pid 68519 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68519' 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68519 00:08:39.630 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68519 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 
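The teardown above runs the `killprocess` helper against the nvmf target (pid 68519): it checks the pid is alive with `kill -0`, inspects the process name via `ps --no-headers -o comm=`, then kills and reaps it. A simplified sketch of that flow (the sudo/process-name special-casing from the real helper is omitted):

```shell
# Simplified killprocess: verify the pid is alive, then kill and reap it.
killprocess() {
    pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process with pid $pid is not running" >&2
        return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child; ignore the non-zero status from the signal exit
    wait "$pid" 2>/dev/null || true
}

sleep 60 &        # hypothetical long-running stand-in for the nvmf target
killprocess $!
```

The `wait` at the end matters: the autotest script follows `kill` with a `wait` on the same pid so the process is fully reaped before the next cleanup step (module unload, namespace teardown) runs.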
00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:08:39.889 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:08:40.148 09:02:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/setup.sh@41 -- # _dev=0 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:08:40.148 ************************************ 00:08:40.148 END TEST nvmf_nmic 00:08:40.148 ************************************ 00:08:40.148 00:08:40.148 real 0m6.525s 00:08:40.148 user 0m21.184s 00:08:40.148 sys 0m1.617s 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:40.148 ************************************ 00:08:40.148 START TEST nvmf_fio_target 00:08:40.148 ************************************ 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:40.148 * Looking for test storage... 
00:08:40.148 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.148 09:02:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:40.408 09:02:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.408 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.409 
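The `cmp_versions` trace above is `scripts/common.sh` deciding whether the installed lcov (1.15) is older than 2, by splitting both version strings on dots and comparing field by field, padding the shorter version with zeros. A sketch of that comparison (names here are illustrative, not the script's own):

```shell
# Return 0 when $1 < $2, comparing dot-separated numeric fields;
# missing fields count as 0 (so "1.15" vs "2" compares as 1.15 vs 2.0).
version_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local n=${#v1[@]} i
    if (( ${#v2[@]} > n )); then n=${#v2[@]}; fi
    for (( i = 0; i < n; i++ )); do
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Because the check succeeds (1 < 2 in the first field), the script selects the lcov 1.x option set, which is what the subsequent `LCOV_OPTS` export in the trace reflects.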
--rc genhtml_branch_coverage=1 00:08:40.409 --rc genhtml_function_coverage=1 00:08:40.409 --rc genhtml_legend=1 00:08:40.409 --rc geninfo_all_blocks=1 00:08:40.409 --rc geninfo_unexecuted_blocks=1 00:08:40.409 00:08:40.409 ' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.409 --rc genhtml_branch_coverage=1 00:08:40.409 --rc genhtml_function_coverage=1 00:08:40.409 --rc genhtml_legend=1 00:08:40.409 --rc geninfo_all_blocks=1 00:08:40.409 --rc geninfo_unexecuted_blocks=1 00:08:40.409 00:08:40.409 ' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.409 --rc genhtml_branch_coverage=1 00:08:40.409 --rc genhtml_function_coverage=1 00:08:40.409 --rc genhtml_legend=1 00:08:40.409 --rc geninfo_all_blocks=1 00:08:40.409 --rc geninfo_unexecuted_blocks=1 00:08:40.409 00:08:40.409 ' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.409 --rc genhtml_branch_coverage=1 00:08:40.409 --rc genhtml_function_coverage=1 00:08:40.409 --rc genhtml_legend=1 00:08:40.409 --rc geninfo_all_blocks=1 00:08:40.409 --rc geninfo_unexecuted_blocks=1 00:08:40.409 00:08:40.409 ' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.409 09:02:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:08:40.409 
09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:40.409 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:40.409 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local 
-g is_hw=no 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # 
set_up lo NVMF_TARGET_NS_CMD 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=()
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns=
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias'
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:08:40.410 10.0.0.1
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:08:40.410 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
00:08:40.411 10.0.0.2
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=()
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br
00:08:40.411 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3
00:08:40.681 10.0.0.3
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
00:08:40.681 10.0.0.4
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:08:40.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:40.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:08:40.681
00:08:40.681 --- 10.0.0.1 ping statistics ---
00:08:40.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.681 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:08:40.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:40.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms
00:08:40.681
00:08:40.681 --- 10.0.0.2 ping statistics ---
00:08:40.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.681 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:08:40.681 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:08:40.682 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:08:40.682 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms
00:08:40.682
00:08:40.682 --- 10.0.0.3 ping statistics ---
00:08:40.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.682 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:08:40.682 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:08:40.682 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms
00:08:40.682
00:08:40.682 --- 10.0.0.4 ping statistics ---
00:08:40.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:40.682 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ ))
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 
NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # 
get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:40.682 09:02:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:40.682 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=68869 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 68869 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 68869 ']' 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:08:40.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.941 09:02:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:40.941 [2024-11-20 09:02:19.693042] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:40.941 [2024-11-20 09:02:19.693962] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.941 [2024-11-20 09:02:19.855333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.199 [2024-11-20 09:02:19.927015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.199 [2024-11-20 09:02:19.927077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.199 [2024-11-20 09:02:19.927092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.199 [2024-11-20 09:02:19.927102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.199 [2024-11-20 09:02:19.927111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:41.199 [2024-11-20 09:02:19.928362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.199 [2024-11-20 09:02:19.928514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.199 [2024-11-20 09:02:19.928647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.199 [2024-11-20 09:02:19.928648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.131 09:02:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:42.389 [2024-11-20 09:02:21.082515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.389 09:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.647 09:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:42.647 09:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:42.904 09:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:42.904 
09:02:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.468 09:02:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:43.468 09:02:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:43.726 09:02:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:43.726 09:02:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:43.984 09:02:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.242 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:44.242 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.808 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:44.808 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.066 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:45.066 09:02:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:45.324 09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:45.583 
09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.583 09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.842 09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:45.842 09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:46.101 09:02:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.360 [2024-11-20 09:02:25.148477] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.360 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:46.618 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:47.183 09:02:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:49.085 09:02:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:49.344 [global] 00:08:49.344 thread=1 00:08:49.344 invalidate=1 00:08:49.344 rw=write 00:08:49.344 time_based=1 00:08:49.344 runtime=1 00:08:49.344 ioengine=libaio 00:08:49.344 direct=1 00:08:49.344 bs=4096 00:08:49.344 iodepth=1 00:08:49.344 norandommap=0 00:08:49.344 numjobs=1 00:08:49.344 00:08:49.344 verify_dump=1 00:08:49.344 verify_backlog=512 00:08:49.344 verify_state_save=0 00:08:49.344 do_verify=1 00:08:49.344 verify=crc32c-intel 00:08:49.344 [job0] 00:08:49.344 filename=/dev/nvme0n1 00:08:49.344 [job1] 00:08:49.344 filename=/dev/nvme0n2 00:08:49.344 [job2] 00:08:49.344 filename=/dev/nvme0n3 00:08:49.344 [job3] 00:08:49.344 filename=/dev/nvme0n4 00:08:49.344 Could not set queue 
depth (nvme0n1) 00:08:49.344 Could not set queue depth (nvme0n2) 00:08:49.344 Could not set queue depth (nvme0n3) 00:08:49.344 Could not set queue depth (nvme0n4) 00:08:49.344 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.345 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.345 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.345 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:49.345 fio-3.35 00:08:49.345 Starting 4 threads 00:08:50.753 00:08:50.753 job0: (groupid=0, jobs=1): err= 0: pid=69173: Wed Nov 20 09:02:29 2024 00:08:50.753 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:50.753 slat (nsec): min=12594, max=60017, avg=17875.11, stdev=5238.97 00:08:50.753 clat (usec): min=142, max=2619, avg=295.59, stdev=78.19 00:08:50.753 lat (usec): min=161, max=2641, avg=313.46, stdev=78.48 00:08:50.753 clat percentiles (usec): 00:08:50.753 | 1.00th=[ 194], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:08:50.753 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:08:50.753 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 330], 00:08:50.753 | 99.00th=[ 429], 99.50th=[ 515], 99.90th=[ 1434], 99.95th=[ 2606], 00:08:50.753 | 99.99th=[ 2606] 00:08:50.753 write: IOPS=2023, BW=8096KiB/s (8290kB/s)(8104KiB/1001msec); 0 zone resets 00:08:50.753 slat (usec): min=17, max=112, avg=26.83, stdev= 6.25 00:08:50.753 clat (usec): min=121, max=948, avg=225.45, stdev=26.02 00:08:50.753 lat (usec): min=143, max=978, avg=252.28, stdev=26.79 00:08:50.753 clat percentiles (usec): 00:08:50.753 | 1.00th=[ 190], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:08:50.753 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 227], 00:08:50.753 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 247], 
95.00th=[ 258], 00:08:50.753 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 453], 99.95th=[ 465], 00:08:50.753 | 99.99th=[ 947] 00:08:50.753 bw ( KiB/s): min= 8192, max= 8192, per=22.30%, avg=8192.00, stdev= 0.00, samples=1 00:08:50.753 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:50.753 lat (usec) : 250=53.20%, 500=46.55%, 750=0.11%, 1000=0.06% 00:08:50.753 lat (msec) : 2=0.06%, 4=0.03% 00:08:50.753 cpu : usr=1.80%, sys=6.00%, ctx=3575, majf=0, minf=13 00:08:50.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.753 issued rwts: total=1536,2026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.753 job1: (groupid=0, jobs=1): err= 0: pid=69174: Wed Nov 20 09:02:29 2024 00:08:50.753 read: IOPS=2316, BW=9267KiB/s (9489kB/s)(9276KiB/1001msec) 00:08:50.753 slat (nsec): min=11104, max=61434, avg=17784.01, stdev=6164.77 00:08:50.753 clat (usec): min=137, max=2021, avg=203.87, stdev=72.57 00:08:50.753 lat (usec): min=153, max=2038, avg=221.65, stdev=70.97 00:08:50.753 clat percentiles (usec): 00:08:50.753 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:08:50.753 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 176], 00:08:50.753 | 70.00th=[ 260], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:08:50.753 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 404], 99.95th=[ 404], 00:08:50.753 | 99.99th=[ 2024] 00:08:50.753 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:50.753 slat (usec): min=11, max=112, avg=27.53, stdev=10.00 00:08:50.753 clat (usec): min=103, max=6055, avg=158.25, stdev=190.45 00:08:50.753 lat (usec): min=125, max=6077, avg=185.79, stdev=190.45 00:08:50.753 clat percentiles (usec): 00:08:50.753 | 1.00th=[ 110], 
5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:08:50.753 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:08:50.753 | 70.00th=[ 143], 80.00th=[ 194], 90.00th=[ 221], 95.00th=[ 235], 00:08:50.753 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 3884], 99.95th=[ 4555], 00:08:50.753 | 99.99th=[ 6063] 00:08:50.753 bw ( KiB/s): min=12288, max=12288, per=33.45%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.753 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.753 lat (usec) : 250=83.28%, 500=16.52%, 750=0.04% 00:08:50.753 lat (msec) : 2=0.04%, 4=0.08%, 10=0.04% 00:08:50.753 cpu : usr=2.90%, sys=8.00%, ctx=4879, majf=0, minf=5 00:08:50.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 issued rwts: total=2319,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.754 job2: (groupid=0, jobs=1): err= 0: pid=69175: Wed Nov 20 09:02:29 2024 00:08:50.754 read: IOPS=2183, BW=8735KiB/s (8945kB/s)(8744KiB/1001msec) 00:08:50.754 slat (nsec): min=10993, max=76580, avg=15629.89, stdev=4426.54 00:08:50.754 clat (usec): min=149, max=7401, avg=218.71, stdev=170.34 00:08:50.754 lat (usec): min=166, max=7461, avg=234.34, stdev=171.29 00:08:50.754 clat percentiles (usec): 00:08:50.754 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:08:50.754 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 194], 00:08:50.754 | 70.00th=[ 245], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 306], 00:08:50.754 | 99.00th=[ 379], 99.50th=[ 529], 99.90th=[ 1237], 99.95th=[ 1795], 00:08:50.754 | 99.99th=[ 7373] 00:08:50.754 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:50.754 slat (nsec): min=11351, max=86202, avg=22702.25, stdev=6470.71 
00:08:50.754 clat (usec): min=110, max=311, avg=164.30, stdev=40.82 00:08:50.754 lat (usec): min=134, max=340, avg=187.00, stdev=40.54 00:08:50.754 clat percentiles (usec): 00:08:50.754 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:08:50.754 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:08:50.754 | 70.00th=[ 163], 80.00th=[ 210], 90.00th=[ 229], 95.00th=[ 251], 00:08:50.754 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 310], 00:08:50.754 | 99.99th=[ 314] 00:08:50.754 bw ( KiB/s): min=12288, max=12288, per=33.45%, avg=12288.00, stdev= 0.00, samples=1 00:08:50.754 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:50.754 lat (usec) : 250=83.67%, 500=16.06%, 750=0.13%, 1000=0.08% 00:08:50.754 lat (msec) : 2=0.04%, 10=0.02% 00:08:50.754 cpu : usr=1.50%, sys=7.40%, ctx=4747, majf=0, minf=3 00:08:50.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 issued rwts: total=2186,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.754 job3: (groupid=0, jobs=1): err= 0: pid=69176: Wed Nov 20 09:02:29 2024 00:08:50.754 read: IOPS=1554, BW=6218KiB/s (6367kB/s)(6224KiB/1001msec) 00:08:50.754 slat (usec): min=13, max=152, avg=21.67, stdev= 7.54 00:08:50.754 clat (usec): min=122, max=1309, avg=285.41, stdev=41.67 00:08:50.754 lat (usec): min=167, max=1339, avg=307.07, stdev=41.94 00:08:50.754 clat percentiles (usec): 00:08:50.754 | 1.00th=[ 174], 5.00th=[ 253], 10.00th=[ 262], 20.00th=[ 269], 00:08:50.754 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:08:50.754 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:08:50.754 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 865], 99.95th=[ 1303], 
00:08:50.754 | 99.99th=[ 1303] 00:08:50.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:50.754 slat (usec): min=22, max=133, avg=30.13, stdev= 9.32 00:08:50.754 clat (usec): min=124, max=819, avg=220.66, stdev=27.98 00:08:50.754 lat (usec): min=154, max=868, avg=250.78, stdev=27.70 00:08:50.754 clat percentiles (usec): 00:08:50.754 | 1.00th=[ 151], 5.00th=[ 190], 10.00th=[ 198], 20.00th=[ 206], 00:08:50.754 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:08:50.754 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:08:50.754 | 99.00th=[ 277], 99.50th=[ 338], 99.90th=[ 412], 99.95th=[ 594], 00:08:50.754 | 99.99th=[ 824] 00:08:50.754 bw ( KiB/s): min= 8192, max= 8192, per=22.30%, avg=8192.00, stdev= 0.00, samples=1 00:08:50.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:50.754 lat (usec) : 250=54.69%, 500=45.14%, 750=0.08%, 1000=0.06% 00:08:50.754 lat (msec) : 2=0.03% 00:08:50.754 cpu : usr=1.90%, sys=7.20%, ctx=3612, majf=0, minf=15 00:08:50.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:50.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.754 issued rwts: total=1556,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:50.754 00:08:50.754 Run status group 0 (all jobs): 00:08:50.754 READ: bw=29.6MiB/s (31.1MB/s), 6138KiB/s-9267KiB/s (6285kB/s-9489kB/s), io=29.7MiB (31.1MB), run=1001-1001msec 00:08:50.754 WRITE: bw=35.9MiB/s (37.6MB/s), 8096KiB/s-9.99MiB/s (8290kB/s-10.5MB/s), io=35.9MiB (37.7MB), run=1001-1001msec 00:08:50.754 00:08:50.754 Disk stats (read/write): 00:08:50.754 nvme0n1: ios=1560/1536, merge=0/0, ticks=465/359, in_queue=824, util=88.48% 00:08:50.754 nvme0n2: ios=2096/2367, merge=0/0, ticks=453/374, in_queue=827, util=89.08% 
00:08:50.754 nvme0n3: ios=2065/2183, merge=0/0, ticks=741/351, in_queue=1092, util=93.33%
00:08:50.754 nvme0n4: ios=1536/1536, merge=0/0, ticks=443/361, in_queue=804, util=89.89%
00:08:50.754 09:02:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:08:50.754 [global]
00:08:50.754 thread=1
00:08:50.754 invalidate=1
00:08:50.754 rw=randwrite
00:08:50.754 time_based=1
00:08:50.754 runtime=1
00:08:50.754 ioengine=libaio
00:08:50.754 direct=1
00:08:50.754 bs=4096
00:08:50.754 iodepth=1
00:08:50.754 norandommap=0
00:08:50.754 numjobs=1
00:08:50.754 
00:08:50.754 verify_dump=1
00:08:50.754 verify_backlog=512
00:08:50.754 verify_state_save=0
00:08:50.754 do_verify=1
00:08:50.754 verify=crc32c-intel
00:08:50.754 [job0]
00:08:50.754 filename=/dev/nvme0n1
00:08:50.754 [job1]
00:08:50.754 filename=/dev/nvme0n2
00:08:50.754 [job2]
00:08:50.754 filename=/dev/nvme0n3
00:08:50.754 [job3]
00:08:50.754 filename=/dev/nvme0n4
00:08:50.754 Could not set queue depth (nvme0n1)
00:08:50.754 Could not set queue depth (nvme0n2)
00:08:50.754 Could not set queue depth (nvme0n3)
00:08:50.754 Could not set queue depth (nvme0n4)
00:08:50.754 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:50.754 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:50.754 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:50.754 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:50.754 fio-3.35
00:08:50.754 Starting 4 threads
00:08:52.130 
00:08:52.130 job0: (groupid=0, jobs=1): err= 0: pid=69237: Wed Nov 20 09:02:30 2024
00:08:52.130 read: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec)
00:08:52.130 slat (nsec): min=12180, max=43814, avg=15687.17, stdev=2797.26
00:08:52.130 clat (usec): min=139, max=633, avg=165.39, stdev=20.60
00:08:52.130 lat (usec): min=153, max=648, avg=181.08, stdev=20.84
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157],
00:08:52.130 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165],
00:08:52.130 | 70.00th=[ 169], 80.00th=[ 172], 90.00th=[ 178], 95.00th=[ 182],
00:08:52.130 | 99.00th=[ 198], 99.50th=[ 314], 99.90th=[ 461], 99.95th=[ 498],
00:08:52.130 | 99.99th=[ 635]
00:08:52.130 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:08:52.130 slat (usec): min=18, max=121, avg=22.19, stdev= 4.67
00:08:52.130 clat (usec): min=104, max=756, avg=131.11, stdev=15.47
00:08:52.130 lat (usec): min=126, max=775, avg=153.29, stdev=16.72
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123],
00:08:52.130 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 130], 60.00th=[ 133],
00:08:52.130 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149],
00:08:52.130 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 204], 99.95th=[ 314],
00:08:52.130 | 99.99th=[ 758]
00:08:52.130 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1
00:08:52.130 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:08:52.130 lat (usec) : 250=99.61%, 500=0.35%, 750=0.02%, 1000=0.02%
00:08:52.130 cpu : usr=2.20%, sys=8.60%, ctx=5939, majf=0, minf=11
00:08:52.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:52.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.130 issued rwts: total=2866,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.130 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:52.130 job1: (groupid=0, jobs=1): err= 0: pid=69238: Wed Nov 20 09:02:30 2024
00:08:52.130 read: IOPS=2856, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec)
00:08:52.130 slat (nsec): min=12368, max=41568, avg=14090.86, stdev=2451.27
00:08:52.130 clat (usec): min=135, max=5050, avg=172.15, stdev=182.49
00:08:52.130 lat (usec): min=148, max=5064, avg=186.25, stdev=182.94
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155],
00:08:52.130 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163],
00:08:52.130 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180],
00:08:52.130 | 99.00th=[ 194], 99.50th=[ 355], 99.90th=[ 3884], 99.95th=[ 3982],
00:08:52.130 | 99.99th=[ 5080]
00:08:52.130 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:08:52.130 slat (usec): min=17, max=121, avg=19.57, stdev= 3.57
00:08:52.130 clat (usec): min=104, max=251, avg=129.56, stdev= 9.58
00:08:52.130 lat (usec): min=123, max=373, avg=149.13, stdev=10.73
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 111], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123],
00:08:52.130 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131],
00:08:52.130 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147],
00:08:52.130 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 172], 99.95th=[ 176],
00:08:52.130 | 99.99th=[ 253]
00:08:52.130 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1
00:08:52.130 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:08:52.130 lat (usec) : 250=99.70%, 500=0.10%, 750=0.05%
00:08:52.130 lat (msec) : 2=0.05%, 4=0.08%, 10=0.02%
00:08:52.130 cpu : usr=2.00%, sys=7.60%, ctx=5931, majf=0, minf=11
00:08:52.130 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:52.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.130 issued rwts: total=2859,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.130 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:52.130 job2: (groupid=0, jobs=1): err= 0: pid=69239: Wed Nov 20 09:02:30 2024
00:08:52.130 read: IOPS=2616, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec)
00:08:52.130 slat (nsec): min=12828, max=38118, avg=14541.31, stdev=2527.23
00:08:52.130 clat (usec): min=147, max=351, avg=173.59, stdev=14.13
00:08:52.130 lat (usec): min=160, max=371, avg=188.13, stdev=14.60
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163],
00:08:52.130 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176],
00:08:52.130 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 198],
00:08:52.130 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 262], 99.95th=[ 285],
00:08:52.130 | 99.99th=[ 351]
00:08:52.130 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:08:52.130 slat (usec): min=17, max=125, avg=21.49, stdev= 6.72
00:08:52.130 clat (usec): min=113, max=1694, avg=140.64, stdev=32.66
00:08:52.130 lat (usec): min=132, max=1712, avg=162.13, stdev=33.87
00:08:52.130 clat percentiles (usec):
00:08:52.130 | 1.00th=[ 119], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 130],
00:08:52.130 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141],
00:08:52.130 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 163],
00:08:52.130 | 99.00th=[ 188], 99.50th=[ 202], 99.90th=[ 231], 99.95th=[ 668],
00:08:52.130 | 99.99th=[ 1696]
00:08:52.130 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1
00:08:52.131 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:08:52.131 lat (usec) : 250=99.88%, 500=0.09%, 750=0.02%
00:08:52.131 lat (msec) : 2=0.02%
00:08:52.131 cpu : usr=1.80%, sys=8.10%, ctx=5691, majf=0, minf=11
00:08:52.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:52.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.131 issued rwts: total=2619,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.131 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:52.131 job3: (groupid=0, jobs=1): err= 0: pid=69240: Wed Nov 20 09:02:30 2024
00:08:52.131 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(9.97MiB/1001msec)
00:08:52.131 slat (usec): min=14, max=178, avg=20.35, stdev= 7.35
00:08:52.131 clat (usec): min=156, max=999, avg=186.75, stdev=20.97
00:08:52.131 lat (usec): min=172, max=1039, avg=207.10, stdev=23.61
00:08:52.131 clat percentiles (usec):
00:08:52.131 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176],
00:08:52.131 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188],
00:08:52.131 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 208],
00:08:52.131 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 293], 99.95th=[ 359],
00:08:52.131 | 99.99th=[ 1004]
00:08:52.131 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:08:52.131 slat (nsec): min=18293, max=98732, avg=29584.68, stdev=9593.55
00:08:52.131 clat (usec): min=117, max=2086, avg=149.92, stdev=50.21
00:08:52.131 lat (usec): min=142, max=2126, avg=179.50, stdev=52.31
00:08:52.131 clat percentiles (usec):
00:08:52.131 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139],
00:08:52.131 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151],
00:08:52.131 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169],
00:08:52.131 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 494], 99.95th=[ 1598],
00:08:52.131 | 99.99th=[ 2089]
00:08:52.131 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1
00:08:52.131 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:08:52.131 lat (usec) : 250=99.71%, 500=0.23%, 1000=0.02%
00:08:52.131 lat (msec) : 2=0.02%, 4=0.02%
00:08:52.131 cpu : usr=2.70%, sys=9.60%, ctx=5112, majf=0, minf=11
00:08:52.131 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:08:52.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:52.131 issued rwts: total=2552,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:52.131 latency : target=0, window=0, percentile=100.00%, depth=1
00:08:52.131 
00:08:52.131 Run status group 0 (all jobs):
00:08:52.131 READ: bw=42.5MiB/s (44.6MB/s), 9.96MiB/s-11.2MiB/s (10.4MB/s-11.7MB/s), io=42.6MiB (44.6MB), run=1001-1001msec
00:08:52.131 WRITE: bw=46.0MiB/s (48.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=46.0MiB (48.2MB), run=1001-1001msec
00:08:52.131 
00:08:52.131 Disk stats (read/write):
00:08:52.131 nvme0n1: ios=2574/2560, merge=0/0, ticks=452/364, in_queue=816, util=87.66%
00:08:52.131 nvme0n2: ios=2559/2560, merge=0/0, ticks=474/344, in_queue=818, util=88.64%
00:08:52.131 nvme0n3: ios=2327/2560, merge=0/0, ticks=407/389, in_queue=796, util=89.15%
00:08:52.131 nvme0n4: ios=2048/2350, merge=0/0, ticks=400/382, in_queue=782, util=89.71%
00:08:52.131 09:02:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:08:52.131 [global]
00:08:52.131 thread=1
00:08:52.131 invalidate=1
00:08:52.131 rw=write
00:08:52.131 time_based=1
00:08:52.131 runtime=1
00:08:52.131 ioengine=libaio
00:08:52.131 direct=1
00:08:52.131 bs=4096
00:08:52.131 iodepth=128
00:08:52.131 norandommap=0
00:08:52.131 numjobs=1
00:08:52.131 
00:08:52.131 verify_dump=1
00:08:52.131 verify_backlog=512
00:08:52.131 verify_state_save=0
00:08:52.131 do_verify=1
00:08:52.131 verify=crc32c-intel
00:08:52.131 [job0]
00:08:52.131 filename=/dev/nvme0n1
00:08:52.131 [job1]
00:08:52.131 filename=/dev/nvme0n2
00:08:52.131 [job2]
00:08:52.131 filename=/dev/nvme0n3
00:08:52.131 [job3]
00:08:52.131 filename=/dev/nvme0n4
00:08:52.131 Could not set queue depth (nvme0n1)
00:08:52.131 Could not set queue depth (nvme0n2)
00:08:52.131 Could not set queue depth (nvme0n3)
00:08:52.131 Could not set queue depth (nvme0n4)
00:08:52.131 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:52.131 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:52.131 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:52.131 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:52.131 fio-3.35
00:08:52.131 Starting 4 threads
00:08:53.505 
00:08:53.505 job0: (groupid=0, jobs=1): err= 0: pid=69293: Wed Nov 20 09:02:32 2024
00:08:53.505 read: IOPS=5513, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1004msec)
00:08:53.505 slat (usec): min=5, max=2702, avg=88.22, stdev=396.73
00:08:53.505 clat (usec): min=365, max=14322, avg=11466.57, stdev=1051.29
00:08:53.505 lat (usec): min=3029, max=15982, avg=11554.79, stdev=990.12
00:08:53.505 clat percentiles (usec):
00:08:53.505 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10421], 20.00th=[11207],
00:08:53.505 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731],
00:08:53.505 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12518],
00:08:53.505 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14222], 99.95th=[14353],
00:08:53.505 | 99.99th=[14353]
00:08:53.505 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets
00:08:53.505 slat (usec): min=8, max=2681, avg=83.31, stdev=301.47
00:08:53.505 clat (usec): min=8680, max=14261, avg=11250.95, stdev=1021.22
00:08:53.505 lat (usec): min=8724, max=14299, avg=11334.26, stdev=1008.95
00:08:53.505 clat percentiles (usec):
00:08:53.505 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[10028],
00:08:53.505 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731],
00:08:53.505 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649],
00:08:53.505 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14222], 99.95th=[14222],
00:08:53.505 | 99.99th=[14222]
00:08:53.505 bw ( KiB/s): min=21072, max=23984, per=34.64%, avg=22528.00, stdev=2059.09, samples=2
00:08:53.505 iops : min= 5268, max= 5996, avg=5632.00, stdev=514.77, samples=2
00:08:53.505 lat (usec) : 500=0.01%
00:08:53.505 lat (msec) : 4=0.29%, 10=13.48%, 20=86.22%
00:08:53.505 cpu : usr=4.99%, sys=15.45%, ctx=725, majf=0, minf=9
00:08:53.505 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:08:53.505 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.505 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:53.505 issued rwts: total=5536,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:53.505 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:53.505 job1: (groupid=0, jobs=1): err= 0: pid=69294: Wed Nov 20 09:02:32 2024
00:08:53.505 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec)
00:08:53.505 slat (usec): min=6, max=8295, avg=191.90, stdev=862.25
00:08:53.505 clat (usec): min=18493, max=31251, avg=25512.91, stdev=2215.33
00:08:53.505 lat (usec): min=20468, max=31277, avg=25704.81, stdev=2063.21
00:08:53.505 clat percentiles (usec):
00:08:53.505 | 1.00th=[19792], 5.00th=[22676], 10.00th=[23725], 20.00th=[23987],
00:08:53.505 | 30.00th=[24249], 40.00th=[24511], 50.00th=[25035], 60.00th=[25297],
00:08:53.505 | 70.00th=[26084], 80.00th=[27132], 90.00th=[28705], 95.00th=[30540],
00:08:53.505 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:08:53.505 | 99.99th=[31327]
00:08:53.505 write: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec); 0 zone resets
00:08:53.505 slat (usec): min=13, max=8864, avg=179.35, stdev=785.39
00:08:53.505 clat (usec): min=270, max=29541, avg=22452.98, stdev=3046.95
00:08:53.505 lat (usec): min=4527, max=29564, avg=22632.34, stdev=2968.58
00:08:53.505 clat percentiles (usec):
00:08:53.505 | 1.00th=[ 5342], 5.00th=[17433], 10.00th=[19530], 20.00th=[21627],
00:08:53.505 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462],
00:08:53.505 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24249], 95.00th=[25297],
00:08:53.505 | 99.00th=[27919], 99.50th=[28705], 99.90th=[29492], 99.95th=[29492],
00:08:53.505 | 99.99th=[29492]
00:08:53.505 bw ( KiB/s): min= 8608, max=12160, per=15.97%, avg=10384.00, stdev=2511.64, samples=2
00:08:53.505 iops : min= 2152, max= 3040, avg=2596.00, stdev=627.91, samples=2
00:08:53.505 lat (usec) : 500=0.02%
00:08:53.505 lat (msec) : 10=0.61%, 20=6.80%, 50=92.58%
00:08:53.505 cpu : usr=2.79%, sys=8.27%, ctx=255, majf=0, minf=13
00:08:53.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:08:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:53.506 issued rwts: total=2560,2721,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:53.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:53.506 job2: (groupid=0, jobs=1): err= 0: pid=69295: Wed Nov 20 09:02:32 2024
00:08:53.506 read: IOPS=4631, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1002msec)
00:08:53.506 slat (usec): min=4, max=3157, avg=99.57, stdev=469.11
00:08:53.506 clat (usec): min=359, max=15381, avg=13168.35, stdev=1068.40
00:08:53.506 lat (usec): min=3481, max=17448, avg=13267.92, stdev=974.13
00:08:53.506 clat percentiles (usec):
00:08:53.506 | 1.00th=[10421], 5.00th=[11207], 10.00th=[12649], 20.00th=[13042],
00:08:53.506 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13304], 60.00th=[13435],
00:08:53.506 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[13960],
00:08:53.506 | 99.00th=[14615], 99.50th=[14615], 99.90th=[15401], 99.95th=[15401],
00:08:53.506 | 99.99th=[15401]
00:08:53.506 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets
00:08:53.506 slat (usec): min=10, max=3948, avg=97.81, stdev=422.56
00:08:53.506 clat (usec): min=4091, max=16339, avg=12779.60, stdev=1441.11
00:08:53.506 lat (usec): min=4160, max=16359, avg=12877.41, stdev=1438.12
00:08:53.506 clat percentiles (usec):
00:08:53.506 | 1.00th=[10421], 5.00th=[10945], 10.00th=[11207], 20.00th=[11338],
00:08:53.506 | 30.00th=[11600], 40.00th=[11731], 50.00th=[13304], 60.00th=[13566],
00:08:53.506 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14353], 95.00th=[14615],
00:08:53.506 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16319], 99.95th=[16319],
00:08:53.506 | 99.99th=[16319]
00:08:53.506 bw ( KiB/s): min=19720, max=20521, per=30.94%, avg=20120.50, stdev=566.39, samples=2
00:08:53.506 iops : min= 4930, max= 5130, avg=5030.00, stdev=141.42, samples=2
00:08:53.506 lat (usec) : 500=0.01%
00:08:53.506 lat (msec) : 4=0.28%, 10=0.50%, 20=99.21%
00:08:53.506 cpu : usr=5.19%, sys=12.59%, ctx=522, majf=0, minf=17
00:08:53.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:08:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:53.506 issued rwts: total=4641,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:53.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:53.506 job3: (groupid=0, jobs=1): err= 0: pid=69296: Wed Nov 20 09:02:32 2024
00:08:53.506 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec)
00:08:53.506 slat (usec): min=6, max=7849, avg=193.72, stdev=791.97
00:08:53.506 clat (usec): min=15934, max=36684, avg=23952.36, stdev=2552.49
00:08:53.506 lat (usec): min=16242, max=36702, avg=24146.08, stdev=2489.62
00:08:53.506 clat percentiles (usec):
00:08:53.506 | 1.00th=[17957], 5.00th=[19792], 10.00th=[20841], 20.00th=[22152],
00:08:53.506 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249],
00:08:53.506 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[27657],
00:08:53.506 | 99.00th=[32375], 99.50th=[33817], 99.90th=[36439], 99.95th=[36439],
00:08:53.506 | 99.99th=[36439]
00:08:53.506 write: IOPS=2854, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1004msec); 0 zone resets
00:08:53.506 slat (usec): min=15, max=5584, avg=167.47, stdev=735.52
00:08:53.506 clat (usec): min=2059, max=34525, avg=22758.68, stdev=3684.49
00:08:53.506 lat (usec): min=7347, max=34550, avg=22926.16, stdev=3625.73
00:08:53.506 clat percentiles (usec):
00:08:53.506 | 1.00th=[ 8094], 5.00th=[16909], 10.00th=[18482], 20.00th=[20579],
00:08:53.506 | 30.00th=[22152], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462],
00:08:53.506 | 70.00th=[23987], 80.00th=[24511], 90.00th=[26346], 95.00th=[28705],
00:08:53.506 | 99.00th=[32375], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341],
00:08:53.506 | 99.99th=[34341]
00:08:53.506 bw ( KiB/s): min= 9616, max=12312, per=16.86%, avg=10964.00, stdev=1906.36, samples=2
00:08:53.506 iops : min= 2404, max= 3078, avg=2741.00, stdev=476.59, samples=2
00:08:53.506 lat (msec) : 4=0.02%, 10=0.59%, 20=10.69%, 50=88.70%
00:08:53.506 cpu : usr=2.39%, sys=9.37%, ctx=297, majf=0, minf=13
00:08:53.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:08:53.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:53.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:53.506 issued rwts: total=2560,2866,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:53.506 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:53.506 
00:08:53.506 Run status group 0 (all jobs):
00:08:53.506 READ: bw=59.5MiB/s (62.3MB/s), 9.95MiB/s-21.5MiB/s (10.4MB/s-22.6MB/s), io=59.8MiB (62.7MB), run=1002-1005msec
00:08:53.506 WRITE: bw=63.5MiB/s (66.6MB/s), 10.6MiB/s-21.9MiB/s (11.1MB/s-23.0MB/s), io=63.8MiB (66.9MB), run=1002-1005msec
00:08:53.506 
00:08:53.506 Disk stats (read/write):
00:08:53.506 nvme0n1: ios=4658/4935, merge=0/0, ticks=12223/12188, in_queue=24411, util=87.27%
00:08:53.506 nvme0n2: ios=2063/2400, merge=0/0, ticks=12496/12736, in_queue=25232, util=87.62%
00:08:53.506 nvme0n3: ios=4096/4201, merge=0/0, ticks=12515/11744, in_queue=24259, util=89.13%
00:08:53.506 nvme0n4: ios=2057/2560, merge=0/0, ticks=12194/13072, in_queue=25266, util=89.60%
00:08:53.506 09:02:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:08:53.506 [global]
00:08:53.506 thread=1
00:08:53.506 invalidate=1
00:08:53.506 rw=randwrite
00:08:53.506 time_based=1
00:08:53.506 runtime=1
00:08:53.506 ioengine=libaio
00:08:53.506 direct=1
00:08:53.506 bs=4096
00:08:53.506 iodepth=128
00:08:53.506 norandommap=0
00:08:53.506 numjobs=1
00:08:53.506 
00:08:53.506 verify_dump=1
00:08:53.506 verify_backlog=512
00:08:53.506 verify_state_save=0
00:08:53.506 do_verify=1
00:08:53.506 verify=crc32c-intel
00:08:53.506 [job0]
00:08:53.506 filename=/dev/nvme0n1
00:08:53.506 [job1]
00:08:53.506 filename=/dev/nvme0n2
00:08:53.506 [job2]
00:08:53.506 filename=/dev/nvme0n3
00:08:53.506 [job3]
00:08:53.506 filename=/dev/nvme0n4
00:08:53.506 Could not set queue depth (nvme0n1)
00:08:53.506 Could not set queue depth (nvme0n2)
00:08:53.506 Could not set queue depth (nvme0n3)
00:08:53.506 Could not set queue depth (nvme0n4)
00:08:53.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:53.506 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:53.506 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:53.506 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:08:53.506 fio-3.35
00:08:53.506 Starting 4 threads
00:08:54.884 
00:08:54.884 job0: (groupid=0, jobs=1): err= 0: pid=69355: Wed Nov 20 09:02:33 2024
00:08:54.884 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec)
00:08:54.884 slat (usec): min=5, max=22568, avg=174.63, stdev=1186.52
00:08:54.884 clat (usec): min=9143, max=47414, avg=21429.01, stdev=6550.34
00:08:54.884 lat (usec): min=9157, max=47431, avg=21603.64, stdev=6618.13
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 9110], 5.00th=[10945], 10.00th=[11338], 20.00th=[15926],
00:08:54.884 | 30.00th=[18482], 40.00th=[20579], 50.00th=[21627], 60.00th=[23462],
00:08:54.884 | 70.00th=[24511], 80.00th=[25297], 90.00th=[28705], 95.00th=[32375],
00:08:54.884 | 99.00th=[43254], 99.50th=[45876], 99.90th=[47449], 99.95th=[47449],
00:08:54.884 | 99.99th=[47449]
00:08:54.884 write: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1018msec); 0 zone resets
00:08:54.884 slat (usec): min=5, max=14834, avg=180.83, stdev=820.35
00:08:54.884 clat (usec): min=2898, max=51679, avg=25361.43, stdev=7896.86
00:08:54.884 lat (usec): min=2920, max=51691, avg=25542.26, stdev=7940.57
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 8225], 5.00th=[10814], 10.00th=[18220], 20.00th=[19792],
00:08:54.884 | 30.00th=[22938], 40.00th=[24249], 50.00th=[25035], 60.00th=[25297],
00:08:54.884 | 70.00th=[25822], 80.00th=[27657], 90.00th=[37487], 95.00th=[41681],
00:08:54.884 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643],
00:08:54.884 | 99.99th=[51643]
00:08:54.884 bw ( KiB/s): min=10104, max=12288, per=18.34%, avg=11196.00, stdev=1544.32, samples=2
00:08:54.884 iops : min= 2526, max= 3072, avg=2799.00, stdev=386.08, samples=2
00:08:54.884 lat (msec) : 4=0.18%, 10=3.34%, 20=24.57%, 50=71.67%, 100=0.24%
00:08:54.884 cpu : usr=3.34%, sys=6.78%, ctx=385, majf=0, minf=13
00:08:54.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9%
00:08:54.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:54.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:54.884 issued rwts: total=2560,2926,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:54.884 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:54.884 job1: (groupid=0, jobs=1): err= 0: pid=69356: Wed Nov 20 09:02:33 2024
00:08:54.884 read: IOPS=4661, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec)
00:08:54.884 slat (usec): min=4, max=8033, avg=100.44, stdev=510.46
00:08:54.884 clat (usec): min=2849, max=27292, avg=12884.31, stdev=3567.88
00:08:54.884 lat (usec): min=4147, max=27303, avg=12984.76, stdev=3601.37
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10683],
00:08:54.884 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125],
00:08:54.884 | 70.00th=[12649], 80.00th=[14353], 90.00th=[19268], 95.00th=[20841],
00:08:54.884 | 99.00th=[24249], 99.50th=[25822], 99.90th=[26608], 99.95th=[27395],
00:08:54.884 | 99.99th=[27395]
00:08:54.884 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets
00:08:54.884 slat (usec): min=4, max=9175, avg=96.32, stdev=529.47
00:08:54.884 clat (usec): min=3626, max=28699, avg=12990.71, stdev=4207.63
00:08:54.884 lat (usec): min=3648, max=30193, avg=13087.03, stdev=4260.86
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 6325], 5.00th=[ 7635], 10.00th=[ 9634], 20.00th=[10421],
00:08:54.884 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600],
00:08:54.884 | 70.00th=[12387], 80.00th=[18220], 90.00th=[19530], 95.00th=[20841],
00:08:54.884 | 99.00th=[25560], 99.50th=[26870], 99.90th=[28443], 99.95th=[28705],
00:08:54.884 | 99.99th=[28705]
00:08:54.884 bw ( KiB/s): min=20072, max=20480, per=33.21%, avg=20276.00, stdev=288.50, samples=2
00:08:54.884 iops : min= 5018, max= 5120, avg=5069.00, stdev=72.12, samples=2
00:08:54.884 lat (msec) : 4=0.15%, 10=10.84%, 20=81.42%, 50=7.59%
00:08:54.884 cpu : usr=3.98%, sys=13.05%, ctx=656, majf=0, minf=13
00:08:54.884 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:08:54.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:54.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:54.884 issued rwts: total=4685,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:54.884 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:54.884 job2: (groupid=0, jobs=1): err= 0: pid=69357: Wed Nov 20 09:02:33 2024
00:08:54.884 read: IOPS=4178, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1006msec)
00:08:54.884 slat (usec): min=3, max=11391, avg=115.16, stdev=691.63
00:08:54.884 clat (usec): min=1126, max=30841, avg=14684.88, stdev=4299.23
00:08:54.884 lat (usec): min=2595, max=31191, avg=14800.03, stdev=4336.42
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 5473], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11076],
00:08:54.884 | 30.00th=[12387], 40.00th=[12649], 50.00th=[13304], 60.00th=[14746],
00:08:54.884 | 70.00th=[16450], 80.00th=[18744], 90.00th=[21103], 95.00th=[22938],
00:08:54.884 | 99.00th=[25035], 99.50th=[27919], 99.90th=[30802], 99.95th=[30802],
00:08:54.884 | 99.99th=[30802]
00:08:54.884 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets
00:08:54.884 slat (usec): min=5, max=10135, avg=103.78, stdev=600.11
00:08:54.884 clat (usec): min=4584, max=30587, avg=14135.22, stdev=4280.73
00:08:54.884 lat (usec): min=4606, max=30613, avg=14239.00, stdev=4337.83
00:08:54.884 clat percentiles (usec):
00:08:54.884 | 1.00th=[ 5145], 5.00th=[ 6915], 10.00th=[ 8455], 20.00th=[12125],
00:08:54.884 | 30.00th=[12649], 40.00th=[13173], 50.00th=[13304], 60.00th=[13566],
00:08:54.884 | 70.00th=[13960], 80.00th=[18744], 90.00th=[20317], 95.00th=[21365],
00:08:54.884 | 99.00th=[25560], 99.50th=[28967], 99.90th=[30540], 99.95th=[30540],
00:08:54.884 | 99.99th=[30540]
00:08:54.884 bw ( KiB/s): min=16816, max=19927, per=30.09%, avg=18371.50, stdev=2199.81, samples=2
00:08:54.884 iops : min= 4204, max= 4981, avg=4592.50, stdev=549.42, samples=2
00:08:54.884 lat (msec) : 2=0.01%, 4=0.01%, 10=10.24%, 20=76.79%, 50=12.95%
00:08:54.884 cpu : usr=3.98%, sys=10.55%, ctx=652, majf=0, minf=13
00:08:54.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:08:54.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:54.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:54.885 issued rwts: total=4204,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:54.885 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:54.885 job3: (groupid=0, jobs=1): err= 0: pid=69358: Wed Nov 20 09:02:33 2024
00:08:54.885 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(10.0MiB/1018msec)
00:08:54.885 slat (usec): min=4, max=23485, avg=210.95, stdev=1459.91
00:08:54.885 clat (usec): min=6607, max=69544, avg=24930.74, stdev=14811.38
00:08:54.885 lat (usec): min=6620, max=69561, avg=25141.69, stdev=14902.83
00:08:54.885 clat percentiles (usec):
00:08:54.885 | 1.00th=[ 7046], 5.00th=[11994], 10.00th=[12649], 20.00th=[13960],
00:08:54.885 | 30.00th=[14877], 40.00th=[15795], 50.00th=[19006], 60.00th=[24773],
00:08:54.885 | 70.00th=[26084], 80.00th=[31065], 90.00th=[52167], 95.00th=[58459],
00:08:54.885 | 99.00th=[67634], 99.50th=[68682], 99.90th=[69731], 99.95th=[69731],
00:08:54.885 | 99.99th=[69731]
00:08:54.885 write: IOPS=2834, BW=11.1MiB/s (11.6MB/s)(11.3MiB/1018msec); 0 zone resets
00:08:54.885 slat (usec): min=6, max=24594, avg=151.61, stdev=794.14
00:08:54.885 clat (usec): min=3478, max=69503, avg=22571.23, stdev=6610.66
00:08:54.885 lat (usec): min=3510, max=69523, avg=22722.84, stdev=6658.84
00:08:54.885 clat percentiles (usec):
00:08:54.885 | 1.00th=[ 5669], 5.00th=[ 8979], 10.00th=[13173], 20.00th=[19268],
00:08:54.885 | 30.00th=[20055], 40.00th=[22676], 50.00th=[23725], 60.00th=[25035],
00:08:54.885 | 70.00th=[25297], 80.00th=[25560], 90.00th=[27132], 95.00th=[32375],
00:08:54.885 | 99.00th=[42206], 99.50th=[47449], 99.90th=[62129], 99.95th=[69731],
00:08:54.885 | 99.99th=[69731]
00:08:54.885 bw ( KiB/s): min= 9800, max=12272, per=18.07%, avg=11036.00, stdev=1747.97, samples=2
00:08:54.885 iops : min= 2450, max= 3068, avg=2759.00, stdev=436.99, samples=2
00:08:54.885 lat (msec) : 4=0.11%, 10=3.75%, 20=37.24%, 50=53.40%, 100=5.51%
00:08:54.885 cpu : usr=2.75%, sys=6.59%, ctx=361, majf=0, minf=13
00:08:54.885 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:08:54.885 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:08:54.885 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:08:54.885 issued rwts: total=2560,2886,0,0 short=0,0,0,0 dropped=0,0,0,0
00:08:54.885 latency : target=0, window=0, percentile=100.00%, depth=128
00:08:54.885 
00:08:54.885 Run status group 0 (all jobs):
00:08:54.885 READ: bw=53.8MiB/s (56.4MB/s), 9.82MiB/s-18.2MiB/s (10.3MB/s-19.1MB/s), io=54.7MiB (57.4MB), run=1005-1018msec
00:08:54.885 WRITE: bw=59.6MiB/s (62.5MB/s), 11.1MiB/s-19.9MiB/s (11.6MB/s-20.9MB/s), io=60.7MiB (63.7MB), run=1005-1018msec
00:08:54.885 
00:08:54.885 Disk stats (read/write):
00:08:54.885 nvme0n1: ios=2098/2543, merge=0/0, ticks=43852/61427, in_queue=105279, util=88.18%
00:08:54.885 nvme0n2: ios=4334/4608, merge=0/0, ticks=27788/26867, in_queue=54655, util=89.59%
00:08:54.885 nvme0n3: ios=3794/4096, merge=0/0, ticks=47915/45193, in_queue=93108, util=89.71%
00:08:54.885 nvme0n4: ios=2048/2503, merge=0/0, ticks=50376/54799, in_queue=105175, util=89.86%
00:08:54.885 09:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:08:54.885 09:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69375
00:08:54.885 09:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:08:54.885 09:02:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:08:54.885 [global]
00:08:54.885 thread=1
00:08:54.885 invalidate=1
00:08:54.885 rw=read
00:08:54.885 time_based=1
00:08:54.885 runtime=10
00:08:54.885 ioengine=libaio
00:08:54.885 direct=1
00:08:54.885 bs=4096
00:08:54.885 iodepth=1
00:08:54.885 norandommap=1
00:08:54.885 numjobs=1
00:08:54.885 
00:08:54.885 [job0]
00:08:54.885 filename=/dev/nvme0n1
00:08:54.885 [job1]
00:08:54.885 filename=/dev/nvme0n2
00:08:54.885 [job2]
00:08:54.885 filename=/dev/nvme0n3
00:08:54.885 [job3]
00:08:54.885 filename=/dev/nvme0n4
00:08:54.885 Could not set queue depth (nvme0n1)
00:08:54.885 Could not set queue depth (nvme0n2)
00:08:54.885 Could not set queue depth (nvme0n3)
00:08:54.885 Could not set queue depth (nvme0n4)
00:08:54.885 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:54.885 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:54.885 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:54.885 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:08:54.885 fio-3.35
00:08:54.885 Starting 4 threads
00:08:58.169 09:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0
00:08:58.169 fio: pid=69424, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:58.169 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=63524864, buflen=4096
00:08:58.169 09:02:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0
00:08:58.426 fio: pid=69423, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:58.426 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=67620864, buflen=4096
00:08:58.426 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:58.426 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:08:58.687 fio: pid=69421, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:58.687 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8474624, buflen=4096
00:08:58.687 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:08:58.687 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:08:58.945 fio: pid=69422, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:08:58.945 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19156992, buflen=4096
00:08:58.945 
00:08:58.945 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69421: Wed Nov 20 09:02:37 2024
00:08:58.945 read: IOPS=5201, BW=20.3MiB/s (21.3MB/s)(72.1MiB/3548msec)
00:08:58.946 slat (usec): min=12, max=9826, avg=17.14, stdev=133.44
00:08:58.946 clat (usec): min=130, max=199833, avg=173.67, stdev=1470.08
00:08:58.946 lat (usec): min=144, max=199847, avg=190.80, stdev=1476.13
00:08:58.946 clat percentiles (usec):
00:08:58.946 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155],
00:08:58.946 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163],
00:08:58.946 | 70.00th=[ 167], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180],
00:08:58.946 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 302], 99.95th=[ 461],
00:08:58.946 | 99.99th=[ 1844]
00:08:58.946 bw ( KiB/s): min=21344, max=22464, per=30.30%, avg=22126.67, stdev=430.05,
samples=6 00:08:58.946 iops : min= 5336, max= 5616, avg=5531.67, stdev=107.51, samples=6 00:08:58.946 lat (usec) : 250=99.86%, 500=0.09%, 750=0.02%, 1000=0.01% 00:08:58.946 lat (msec) : 2=0.02%, 250=0.01% 00:08:58.946 cpu : usr=1.30%, sys=6.60%, ctx=18459, majf=0, minf=1 00:08:58.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 issued rwts: total=18454,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.946 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69422: Wed Nov 20 09:02:37 2024 00:08:58.946 read: IOPS=5375, BW=21.0MiB/s (22.0MB/s)(82.3MiB/3918msec) 00:08:58.946 slat (usec): min=12, max=10794, avg=18.70, stdev=133.71 00:08:58.946 clat (usec): min=128, max=4146, avg=165.91, stdev=59.03 00:08:58.946 lat (usec): min=140, max=10987, avg=184.61, stdev=147.39 00:08:58.946 clat percentiles (usec): 00:08:58.946 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 153], 00:08:58.946 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:08:58.946 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 188], 00:08:58.946 | 99.00th=[ 247], 99.50th=[ 289], 99.90th=[ 906], 99.95th=[ 1385], 00:08:58.946 | 99.99th=[ 1942] 00:08:58.946 bw ( KiB/s): min=20045, max=22024, per=29.22%, avg=21341.29, stdev=715.70, samples=7 00:08:58.946 iops : min= 5011, max= 5506, avg=5335.29, stdev=179.00, samples=7 00:08:58.946 lat (usec) : 250=99.11%, 500=0.65%, 750=0.07%, 1000=0.07% 00:08:58.946 lat (msec) : 2=0.08%, 4=0.01%, 10=0.01% 00:08:58.946 cpu : usr=1.48%, sys=7.33%, ctx=21081, majf=0, minf=1 00:08:58.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 issued rwts: total=21062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.946 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69423: Wed Nov 20 09:02:37 2024 00:08:58.946 read: IOPS=5072, BW=19.8MiB/s (20.8MB/s)(64.5MiB/3255msec) 00:08:58.946 slat (usec): min=12, max=13171, avg=16.46, stdev=118.99 00:08:58.946 clat (usec): min=142, max=5430, avg=179.25, stdev=135.56 00:08:58.946 lat (usec): min=156, max=13374, avg=195.71, stdev=180.85 00:08:58.946 clat percentiles (usec): 00:08:58.946 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:08:58.946 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:08:58.946 | 70.00th=[ 180], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:08:58.946 | 99.00th=[ 204], 99.50th=[ 217], 99.90th=[ 3195], 99.95th=[ 3884], 00:08:58.946 | 99.99th=[ 5211] 00:08:58.946 bw ( KiB/s): min=19536, max=21232, per=27.88%, avg=20357.33, stdev=611.96, samples=6 00:08:58.946 iops : min= 4884, max= 5308, avg=5089.33, stdev=152.99, samples=6 00:08:58.946 lat (usec) : 250=99.67%, 500=0.10%, 750=0.04%, 1000=0.01% 00:08:58.946 lat (msec) : 2=0.03%, 4=0.10%, 10=0.04% 00:08:58.946 cpu : usr=1.72%, sys=6.02%, ctx=16512, majf=0, minf=2 00:08:58.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 issued rwts: total=16510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.946 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69424: Wed Nov 20 
09:02:37 2024 00:08:58.946 read: IOPS=5199, BW=20.3MiB/s (21.3MB/s)(60.6MiB/2983msec) 00:08:58.946 slat (nsec): min=12240, max=85143, avg=14428.52, stdev=2397.26 00:08:58.946 clat (usec): min=148, max=2145, avg=176.49, stdev=24.46 00:08:58.946 lat (usec): min=161, max=2170, avg=190.92, stdev=24.72 00:08:58.946 clat percentiles (usec): 00:08:58.946 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:08:58.946 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:08:58.946 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 196], 00:08:58.946 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 404], 99.95th=[ 603], 00:08:58.946 | 99.99th=[ 1106] 00:08:58.946 bw ( KiB/s): min=20624, max=21120, per=28.60%, avg=20888.00, stdev=187.96, samples=5 00:08:58.946 iops : min= 5156, max= 5280, avg=5222.00, stdev=46.99, samples=5 00:08:58.946 lat (usec) : 250=99.84%, 500=0.08%, 750=0.06%, 1000=0.01% 00:08:58.946 lat (msec) : 2=0.01%, 4=0.01% 00:08:58.946 cpu : usr=1.04%, sys=6.51%, ctx=15514, majf=0, minf=2 00:08:58.946 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:58.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:58.946 issued rwts: total=15510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:58.946 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:58.946 00:08:58.946 Run status group 0 (all jobs): 00:08:58.946 READ: bw=71.3MiB/s (74.8MB/s), 19.8MiB/s-21.0MiB/s (20.8MB/s-22.0MB/s), io=279MiB (293MB), run=2983-3918msec 00:08:58.946 00:08:58.946 Disk stats (read/write): 00:08:58.946 nvme0n1: ios=18428/0, merge=0/0, ticks=3068/0, in_queue=3068, util=95.39% 00:08:58.946 nvme0n2: ios=20742/0, merge=0/0, ticks=3521/0, in_queue=3521, util=95.84% 00:08:58.946 nvme0n3: ios=15757/0, merge=0/0, ticks=2833/0, in_queue=2833, util=95.40% 00:08:58.946 nvme0n4: ios=14922/0, merge=0/0, ticks=2671/0, 
in_queue=2671, util=96.76% 00:08:58.946 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:58.946 09:02:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:59.231 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.231 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:59.798 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:59.798 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:00.056 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.056 09:02:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:00.314 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:00.314 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69375 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:00.572 09:02:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:00.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:00.572 nvmf hotplug test: fio failed as expected 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:00.572 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 
-- # trap - SIGINT SIGTERM EXIT 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:01.138 rmmod nvme_tcp 00:09:01.138 rmmod nvme_fabrics 00:09:01.138 rmmod nvme_keyring 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 68869 ']' 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 68869 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 68869 ']' 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 68869 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68869 
00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.138 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.138 killing process with pid 68869 00:09:01.139 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68869' 00:09:01.139 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 68869 00:09:01.139 09:02:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 68869 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:01.139 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete 
initiator1' 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:01.397 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:09:01.397 00:09:01.398 real 0m21.313s 00:09:01.398 user 1m20.673s 00:09:01.398 sys 0m10.083s 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 ************************************ 00:09:01.398 END TEST nvmf_fio_target 00:09:01.398 
************************************ 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:01.398 ************************************ 00:09:01.398 START TEST nvmf_bdevio 00:09:01.398 ************************************ 00:09:01.398 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:01.656 * Looking for test storage... 00:09:01.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@337 -- # IFS=.-: 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:01.656 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # 
ver2[v]=2 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.657 --rc genhtml_branch_coverage=1 00:09:01.657 --rc genhtml_function_coverage=1 00:09:01.657 --rc genhtml_legend=1 00:09:01.657 --rc geninfo_all_blocks=1 00:09:01.657 --rc geninfo_unexecuted_blocks=1 00:09:01.657 00:09:01.657 ' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.657 --rc genhtml_branch_coverage=1 00:09:01.657 --rc genhtml_function_coverage=1 00:09:01.657 --rc genhtml_legend=1 00:09:01.657 --rc geninfo_all_blocks=1 00:09:01.657 --rc geninfo_unexecuted_blocks=1 00:09:01.657 00:09:01.657 ' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.657 --rc genhtml_branch_coverage=1 00:09:01.657 --rc genhtml_function_coverage=1 00:09:01.657 --rc genhtml_legend=1 00:09:01.657 --rc geninfo_all_blocks=1 00:09:01.657 --rc geninfo_unexecuted_blocks=1 00:09:01.657 00:09:01.657 ' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:01.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:01.657 --rc 
genhtml_branch_coverage=1 00:09:01.657 --rc genhtml_function_coverage=1 00:09:01.657 --rc genhtml_legend=1 00:09:01.657 --rc geninfo_all_blocks=1 00:09:01.657 --rc geninfo_unexecuted_blocks=1 00:09:01.657 00:09:01.657 ' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 
00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:09:01.657 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:01.658 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp 
== tcp ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 
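At this point the harness has created a fresh `nvmf_ns_spdk` namespace and is about to build the `nvmf_br` bridge plus one veth pair per initiator/target. A dry-run sketch of the per-pair plumbing that the following log entries perform (device and namespace names are taken from the log itself; the function only prints the commands, so it runs without root):

```shell
#!/usr/bin/env bash
# Dry-run recap of the veth/bridge plumbing the log performs for each
# initiator/target pair. Names (nvmf_br, nvmf_ns_spdk, initiatorN, targetN)
# come from the log; echoing instead of executing keeps this unprivileged.
recap_pair() {
  local id=$1
  echo "ip link add initiator${id} type veth peer name initiator${id}_br"
  echo "ip link add target${id} type veth peer name target${id}_br"
  echo "ip link set target${id} netns nvmf_ns_spdk"    # target half moves into the namespace
  echo "ip link set initiator${id}_br master nvmf_br"  # bridge halves join nvmf_br
  echo "ip link set target${id}_br master nvmf_br"
}

recap_pair 0
recap_pair 1
```

The `_br` suffixed peers stay in the root namespace and are enslaved to the bridge, which is what lets traffic from `initiator0` reach `target0` inside the namespace.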
00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:09:01.658 09:02:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # 
local dev=initiator0_br in_ns= 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:09:01.658 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:09:01.659 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:09:01.918 10.0.0.1 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:09:01.918 10.0.0.2 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 
-- # ip link set initiator0 up 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 
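The `setup_interface_pair` calls above receive a packed 32-bit value (167772161 is 0x0A000001) that `val_to_ip` renders as a dotted quad, advancing the pool by two addresses per pair. A minimal sketch of that conversion using only bash arithmetic (the helper name mirrors the one visible in `nvmf/setup.sh`, but this is a reconstruction, not the original source):

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip conversion seen in the log: split a 32-bit
# integer into four octets with shell arithmetic and print dotted-quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (pair 0 initiator)
val_to_ip 167772162   # 10.0.0.2 (pair 0 target)
val_to_ip 167772163   # 10.0.0.3 (pair 1 initiator)
```

This explains why pair 1 starts at 167772163: each pair consumes two consecutive addresses from the 10.0.0.0/24 pool.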
00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:09:01.918 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:01.919 09:02:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:01.919 09:02:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:09:01.919 10.0.0.3 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:09:01.919 10.0.0.4 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:01.919 09:02:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:09:01.919 
09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 
-- # (( pair = 0 )) 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:01.919 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:01.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:01.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:09:01.920 00:09:01.920 --- 10.0.0.1 ping statistics --- 00:09:01.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:01.920 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 
00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:01.920 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:02.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:02.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:09:02.179 00:09:02.179 --- 10.0.0.2 ping statistics --- 00:09:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.179 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:02.179 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 
]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:09:02.180 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:02.180 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:09:02.180 00:09:02.180 --- 10.0.0.3 ping statistics --- 00:09:02.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.180 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # 
local dev=target1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:09:02.180 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:02.180 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:09:02.180 00:09:02.180 --- 10.0.0.4 ping statistics --- 00:09:02.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.180 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:02.180 09:02:40 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # 
dev=target0 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:02.180 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 
00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=69807 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 69807 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 69807 ']' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.181 09:02:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:02.181 [2024-11-20 09:02:41.032800] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:02.181 [2024-11-20 09:02:41.032945] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.440 [2024-11-20 09:02:41.183994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:02.440 [2024-11-20 09:02:41.257015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:02.440 [2024-11-20 09:02:41.257106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.440 [2024-11-20 09:02:41.257121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.440 [2024-11-20 09:02:41.257131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.440 [2024-11-20 09:02:41.257141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:02.440 [2024-11-20 09:02:41.258788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:02.440 [2024-11-20 09:02:41.258955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:02.440 [2024-11-20 09:02:41.259081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:02.440 [2024-11-20 09:02:41.259481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.375 09:02:42 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 [2024-11-20 09:02:42.063868] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 Malloc0 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.375 [2024-11-20 09:02:42.124749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:03.375 { 00:09:03.375 "params": { 00:09:03.375 "name": "Nvme$subsystem", 00:09:03.375 "trtype": "$TEST_TRANSPORT", 00:09:03.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.375 "adrfam": "ipv4", 00:09:03.375 "trsvcid": "$NVMF_PORT", 00:09:03.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.375 "hdgst": ${hdgst:-false}, 00:09:03.375 "ddgst": ${ddgst:-false} 00:09:03.375 }, 00:09:03.375 "method": "bdev_nvme_attach_controller" 00:09:03.375 } 00:09:03.375 EOF 00:09:03.375 )") 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:09:03.375 09:02:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:03.375 "params": { 00:09:03.375 "name": "Nvme1", 00:09:03.375 "trtype": "tcp", 00:09:03.375 "traddr": "10.0.0.2", 00:09:03.375 "adrfam": "ipv4", 00:09:03.375 "trsvcid": "4420", 00:09:03.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.375 "hdgst": false, 00:09:03.375 "ddgst": false 00:09:03.375 }, 00:09:03.375 "method": "bdev_nvme_attach_controller" 00:09:03.375 }' 00:09:03.375 [2024-11-20 09:02:42.184795] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:03.375 [2024-11-20 09:02:42.184881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69861 ] 00:09:03.633 [2024-11-20 09:02:42.336417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.633 [2024-11-20 09:02:42.410367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.633 [2024-11-20 09:02:42.410488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.633 [2024-11-20 09:02:42.410826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.890 I/O targets: 00:09:03.890 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:03.890 00:09:03.890 00:09:03.890 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.890 http://cunit.sourceforge.net/ 00:09:03.890 00:09:03.890 00:09:03.890 Suite: bdevio tests on: Nvme1n1 00:09:03.890 Test: blockdev write read block ...passed 00:09:03.890 Test: blockdev write zeroes read block ...passed 00:09:03.890 Test: blockdev write zeroes read no split ...passed 00:09:03.890 Test: blockdev write zeroes read 
split ...passed 00:09:03.890 Test: blockdev write zeroes read split partial ...passed 00:09:03.890 Test: blockdev reset ...[2024-11-20 09:02:42.712304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:03.890 [2024-11-20 09:02:42.712537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6f50 (9): Bad file descriptor 00:09:03.890 [2024-11-20 09:02:42.725340] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:03.890 passed 00:09:03.890 Test: blockdev write read 8 blocks ...passed 00:09:03.890 Test: blockdev write read size > 128k ...passed 00:09:03.890 Test: blockdev write read invalid size ...passed 00:09:03.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:03.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:03.890 Test: blockdev write read max offset ...passed 00:09:04.149 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:04.149 Test: blockdev writev readv 8 blocks ...passed 00:09:04.149 Test: blockdev writev readv 30 x 1block ...passed 00:09:04.149 Test: blockdev writev readv block ...passed 00:09:04.149 Test: blockdev writev readv size > 128k ...passed 00:09:04.149 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:04.149 Test: blockdev comparev and writev ...[2024-11-20 09:02:42.898939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.898997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.899017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 
09:02:42.899028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.899441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.899467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.899485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.899495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.899812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.899834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.899850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.899861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.900332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.900356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.900373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:04.149 [2024-11-20 09:02:42.900383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:04.149 passed 00:09:04.149 Test: blockdev nvme passthru rw ...passed 00:09:04.149 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:02:42.984190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:04.149 [2024-11-20 09:02:42.984259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.984414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:04.149 [2024-11-20 09:02:42.984453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.984586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:04.149 [2024-11-20 09:02:42.984623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:04.149 [2024-11-20 09:02:42.984750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:04.149 [2024-11-20 09:02:42.984795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:04.149 passed 00:09:04.149 Test: blockdev nvme admin passthru ...passed 00:09:04.149 Test: blockdev copy ...passed 00:09:04.149 00:09:04.149 Run Summary: Type Total Ran Passed Failed Inactive 00:09:04.149 suites 1 1 n/a 0 0 00:09:04.149 tests 23 23 23 0 0 00:09:04.149 asserts 152 152 152 0 n/a 00:09:04.149 00:09:04.149 Elapsed time = 0.893 seconds 
00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:04.408 rmmod nvme_tcp 00:09:04.408 rmmod nvme_fabrics 00:09:04.408 rmmod nvme_keyring 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 69807 ']' 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 69807 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # 
'[' -z 69807 ']' 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 69807 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.408 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69807 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69807' 00:09:04.666 killing process with pid 69807 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 69807 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 69807 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:04.666 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@266 
-- # delete_dev initiator1 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:09:04.925 00:09:04.925 real 0m3.475s 00:09:04.925 user 0m11.495s 00:09:04.925 sys 0m0.944s 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.925 ************************************ 00:09:04.925 END TEST nvmf_bdevio 00:09:04.925 ************************************ 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.925 ************************************ 00:09:04.925 START TEST nvmf_target_multipath 00:09:04.925 ************************************ 00:09:04.925 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:05.185 * Looking for test storage... 
00:09:05.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:05.185 09:02:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:05.185 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.185 --rc genhtml_branch_coverage=1 00:09:05.185 --rc genhtml_function_coverage=1 00:09:05.186 --rc genhtml_legend=1 00:09:05.186 --rc geninfo_all_blocks=1 00:09:05.186 --rc geninfo_unexecuted_blocks=1 00:09:05.186 00:09:05.186 ' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.186 --rc genhtml_branch_coverage=1 00:09:05.186 --rc genhtml_function_coverage=1 00:09:05.186 --rc genhtml_legend=1 00:09:05.186 --rc geninfo_all_blocks=1 00:09:05.186 --rc geninfo_unexecuted_blocks=1 00:09:05.186 00:09:05.186 ' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.186 --rc genhtml_branch_coverage=1 00:09:05.186 --rc genhtml_function_coverage=1 00:09:05.186 --rc genhtml_legend=1 00:09:05.186 --rc geninfo_all_blocks=1 00:09:05.186 --rc geninfo_unexecuted_blocks=1 00:09:05.186 00:09:05.186 ' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.186 --rc genhtml_branch_coverage=1 00:09:05.186 --rc genhtml_function_coverage=1 00:09:05.186 --rc genhtml_legend=1 00:09:05.186 --rc geninfo_all_blocks=1 00:09:05.186 --rc geninfo_unexecuted_blocks=1 00:09:05.186 00:09:05.186 ' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 
00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.186 09:02:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@50 -- # : 0 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:05.186 09:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:05.186 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:05.186 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:09:05.186 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:09:05.187 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:09:05.187 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:05.187 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:09:05.187 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:09:05.449 10.0.0.1 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:05.449 10.0.0.2 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 
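The trace above shows `val_to_ip` turning the integer 167772161 into `10.0.0.1` via `printf '%u.%u.%u.%u\n'` before assigning it to `initiator0` (and 167772162 → `10.0.0.2` for `target0` inside the `nvmf_ns_spdk` namespace). A minimal sketch of that conversion, reconstructed from the printf call in the log (the byte-extraction arithmetic is an assumption, since setup.sh's own body is not shown here):

```shell
# Hypothetical re-creation of setup.sh's val_to_ip helper: converts a
# 32-bit integer (e.g. 167772161 = 0x0a000001) into dotted-quad notation
# by extracting each byte, mirroring the printf '%u.%u.%u.%u\n' in the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' $(( val >> 24 & 255 )) $(( val >> 16 & 255 )) \
                         $(( val >> 8 & 255 )) $(( val & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (assigned to initiator0 in the log)
val_to_ip 167772162   # 10.0.0.2 (assigned to target0 in nvmf_ns_spdk)
```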
00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:09:05.449 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i 
initiator0 -p tcp --dport 4420 -j ACCEPT' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- 
# ip link add initiator1 type veth peer name initiator1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local 
dev=target1 in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:09:05.450 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:09:05.450 10.0.0.3 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:09:05.450 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:09:05.451 10.0.0.4 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up 
target1_br 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:05.451 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:05.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:09:05.451 00:09:05.451 --- 10.0.0.1 ping statistics --- 00:09:05.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.451 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:05.451 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:05.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:09:05.452 00:09:05.452 --- 10.0.0.2 ping statistics --- 00:09:05.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.452 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:05.452 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:05.714 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:09:05.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:05.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:09:05.714 00:09:05.714 --- 10.0.0.3 ping statistics --- 00:09:05.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.714 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:05.714 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:09:05.714 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:05.714 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:09:05.714 00:09:05.714 --- 10.0.0.4 ping statistics --- 00:09:05.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.714 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:05.714 09:02:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # 
NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.714 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # nvmfappstart -m 0xF 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 
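The trace above repeatedly resolves interface addresses via `get_ip_address` (the IP is stored in the interface's sysfs `ifalias` attribute), and the multipath checks later in this log poll per-path `ana_state` files via `check_ana_state`. A minimal self-contained sketch of both patterns, using a temporary directory in place of the real `/sys` tree (the helper names mirror `nvmf/setup.sh` and `target/multipath.sh`, but this is a simplified reconstruction, not the scripts' exact code):

```shell
#!/usr/bin/env bash
# Resolve an interface's IP the way the trace does: read its sysfs ifalias.
# sysfs_root stands in for /sys (or for "ip netns exec ... cat" in a netns).
get_ip_address() {
    local sysfs_root=$1 dev=$2 ip
    ip=$(cat "$sysfs_root/class/net/$dev/ifalias") || return 1
    [[ -n $ip ]] && echo "$ip"
}

# Poll a path's ana_state file until it reports the expected ANA state,
# mirroring the timeout=20 / sleep-1s loop seen in multipath.sh.
check_ana_state() {
    local f=$1 expected=$2 timeout=20
    while (( timeout-- > 0 )); do
        [[ -e $f && $(cat "$f") == "$expected" ]] && return 0
        sleep 1
    done
    return 1
}

# Demo against a temp tree standing in for /sys.
root=$(mktemp -d)
mkdir -p "$root/class/net/initiator0" "$root/block/nvme0c0n1"
echo 10.0.0.1 > "$root/class/net/initiator0/ifalias"
echo optimized > "$root/block/nvme0c0n1/ana_state"

get_ip_address "$root" initiator0
check_ana_state "$root/block/nvme0c0n1/ana_state" optimized && echo "ana: optimized"

rm -rf "$root"
```

In the real run the same lookups happen inside the `nvmf_ns_spdk` network namespace for target interfaces (`ip netns exec nvmf_ns_spdk cat ...`), and `check_ana_state` reads `/sys/block/nvme0cXn1/ana_state` after the listener's ANA state is changed over RPC.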
09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=70100 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 70100 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 70100 ']' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.715 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:05.715 [2024-11-20 09:02:44.540318] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:05.715 [2024-11-20 09:02:44.540442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.973 [2024-11-20 09:02:44.697242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.973 [2024-11-20 09:02:44.768105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:05.973 [2024-11-20 09:02:44.768166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.973 [2024-11-20 09:02:44.768181] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.973 [2024-11-20 09:02:44.768192] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.973 [2024-11-20 09:02:44.768202] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.973 [2024-11-20 09:02:44.769386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.973 [2024-11-20 09:02:44.769522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.973 [2024-11-20 09:02:44.769625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.973 [2024-11-20 09:02:44.769627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.232 09:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:06.490 [2024-11-20 09:02:45.258993] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.490 09:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:06.748 Malloc0 00:09:06.748 09:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:07.006 09:02:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.573 09:02:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.573 [2024-11-20 09:02:46.423667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.573 09:02:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:07.831 [2024-11-20 09:02:46.687949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:07.831 09:02:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:09:08.088 09:02:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.4 -s 4420 -g -G 00:09:08.353 09:02:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:08.353 09:02:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:08.353 09:02:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:08.353 09:02:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:08.353 09:02:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@66 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@66 -- # subsystem=nvme-subsys0 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # paths=("${paths[@]##*/}") 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@70 -- # (( 2 == 2 )) 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # p0=nvme0c0n1 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # p1=nvme0c1n1 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@75 -- # check_ana_state nvme0c0n1 optimized 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # check_ana_state nvme0c1n1 optimized 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # echo numa 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # fio_pid=70230 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:10.258 09:02:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@84 -- # sleep 1 00:09:10.258 [global] 00:09:10.258 thread=1 00:09:10.258 invalidate=1 00:09:10.258 rw=randrw 00:09:10.258 time_based=1 00:09:10.258 runtime=6 00:09:10.258 ioengine=libaio 00:09:10.258 direct=1 00:09:10.258 bs=4096 00:09:10.258 iodepth=128 00:09:10.258 norandommap=0 00:09:10.258 numjobs=1 00:09:10.258 00:09:10.258 verify_dump=1 00:09:10.258 verify_backlog=512 00:09:10.258 verify_state_save=0 00:09:10.258 do_verify=1 00:09:10.258 verify=crc32c-intel 00:09:10.516 
[job0] 00:09:10.516 filename=/dev/nvme0n1 00:09:10.516 Could not set queue depth (nvme0n1) 00:09:10.516 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:10.516 fio-3.35 00:09:10.516 Starting 1 thread 00:09:11.451 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:11.710 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@89 -- # check_ana_state nvme0c0n1 inaccessible 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # check_ana_state nvme0c1n1 non-optimized 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:11.968 09:02:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:12.914 09:02:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:12.914 09:02:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:12.914 09:02:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:12.914 09:02:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:13.187 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 non-optimized 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 inaccessible 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:13.445 09:02:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:14.821 09:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:14.821 09:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:14.821 09:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:14.821 09:02:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # wait 70230 00:09:16.723 00:09:16.723 job0: (groupid=0, jobs=1): err= 0: pid=70255: Wed Nov 20 09:02:55 2024 00:09:16.723 read: IOPS=10.9k, BW=42.6MiB/s (44.6MB/s)(256MiB/6006msec) 00:09:16.723 slat (usec): min=2, max=6738, avg=52.67, stdev=243.43 00:09:16.723 clat (usec): min=413, max=50222, avg=8042.30, stdev=1273.47 00:09:16.724 lat (usec): min=435, max=50229, avg=8094.97, stdev=1283.97 00:09:16.724 clat percentiles (usec): 00:09:16.724 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 6849], 20.00th=[ 7242], 00:09:16.724 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:09:16.724 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10290], 00:09:16.724 | 99.00th=[11994], 99.50th=[12387], 99.90th=[13042], 99.95th=[13566], 00:09:16.724 | 99.99th=[13960] 00:09:16.724 bw ( KiB/s): min= 7744, max=28184, per=51.69%, avg=22538.91, stdev=6795.81, samples=11 00:09:16.724 iops : min= 1936, max= 7046, avg=5634.73, stdev=1698.95, samples=11 00:09:16.724 write: IOPS=6442, BW=25.2MiB/s (26.4MB/s)(134MiB/5314msec); 0 zone resets 00:09:16.724 slat (usec): min=4, max=3023, avg=63.56, stdev=163.62 00:09:16.724 clat (usec): min=377, max=13692, avg=6869.85, stdev=1023.80 00:09:16.724 lat (usec): min=463, max=13730, avg=6933.41, stdev=1027.69 00:09:16.724 clat percentiles (usec): 00:09:16.724 | 1.00th=[ 3851], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6325], 00:09:16.724 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7111], 00:09:16.724 | 70.00th=[ 7308], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8160], 00:09:16.724 | 99.00th=[10159], 99.50th=[10683], 99.90th=[12387], 99.95th=[12649], 00:09:16.724 | 99.99th=[13042] 00:09:16.724 bw ( KiB/s): min= 8192, 
max=28088, per=87.60%, avg=22573.09, stdev=6509.17, samples=11 00:09:16.724 iops : min= 2048, max= 7022, avg=5643.27, stdev=1627.29, samples=11 00:09:16.724 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:16.724 lat (msec) : 2=0.03%, 4=0.55%, 10=95.29%, 20=4.11%, 50=0.01% 00:09:16.724 lat (msec) : 100=0.01% 00:09:16.724 cpu : usr=5.81%, sys=22.00%, ctx=6531, majf=0, minf=90 00:09:16.724 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:16.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.724 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.724 issued rwts: total=65466,34234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.724 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.724 00:09:16.724 Run status group 0 (all jobs): 00:09:16.724 READ: bw=42.6MiB/s (44.6MB/s), 42.6MiB/s-42.6MiB/s (44.6MB/s-44.6MB/s), io=256MiB (268MB), run=6006-6006msec 00:09:16.724 WRITE: bw=25.2MiB/s (26.4MB/s), 25.2MiB/s-25.2MiB/s (26.4MB/s-26.4MB/s), io=134MiB (140MB), run=5314-5314msec 00:09:16.724 00:09:16.724 Disk stats (read/write): 00:09:16.724 nvme0n1: ios=64766/33335, merge=0/0, ticks=488544/214458, in_queue=703002, util=98.70% 00:09:16.724 09:02:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:09:16.982 09:02:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@103 -- # check_ana_state nvme0c0n1 optimized 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local 
path=nvme0c0n1 ana_state=optimized 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # check_ana_state nvme0c1n1 optimized 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:09:17.241 09:02:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # echo round-robin 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # fio_pid=70385 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:18.245 09:02:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@112 -- # sleep 1 00:09:18.245 [global] 00:09:18.245 thread=1 00:09:18.245 invalidate=1 00:09:18.245 rw=randrw 00:09:18.245 time_based=1 00:09:18.245 runtime=6 00:09:18.245 ioengine=libaio 00:09:18.245 direct=1 00:09:18.245 bs=4096 00:09:18.245 iodepth=128 00:09:18.245 norandommap=0 00:09:18.245 numjobs=1 00:09:18.245 00:09:18.245 verify_dump=1 00:09:18.245 verify_backlog=512 00:09:18.245 verify_state_save=0 00:09:18.245 do_verify=1 00:09:18.245 verify=crc32c-intel 00:09:18.245 [job0] 00:09:18.245 filename=/dev/nvme0n1 00:09:18.245 Could not set queue depth (nvme0n1) 00:09:18.504 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:18.504 fio-3.35 00:09:18.504 Starting 1 thread 00:09:19.440 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@114 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:09:19.699 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:19.958 09:02:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@117 -- # check_ana_state nvme0c0n1 inaccessible 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # check_ana_state nvme0c1n1 non-optimized 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:19.958 09:02:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:21.022 09:02:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:21.022 09:02:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.022 09:02:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.022 09:02:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:09:21.281 09:02:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 non-optimized 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 inaccessible 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:21.539 09:03:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:22.474 09:03:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:22.474 09:03:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:22.474 09:03:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:22.474 09:03:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # wait 70385 00:09:25.003 00:09:25.003 job0: (groupid=0, jobs=1): err= 0: pid=70407: Wed Nov 20 09:03:03 2024 00:09:25.003 read: IOPS=12.3k, BW=47.9MiB/s (50.3MB/s)(288MiB/6003msec) 00:09:25.003 slat (usec): min=4, max=5731, avg=41.82, stdev=210.96 00:09:25.003 clat (usec): min=319, max=15932, avg=7288.00, stdev=1685.67 00:09:25.003 lat (usec): min=340, max=15940, avg=7329.82, stdev=1703.87 00:09:25.003 clat percentiles (usec): 00:09:25.003 | 1.00th=[ 3064], 5.00th=[ 4228], 10.00th=[ 4883], 20.00th=[ 5932], 00:09:25.003 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7701], 00:09:25.003 | 70.00th=[ 8029], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[ 9634], 00:09:25.003 | 99.00th=[11600], 99.50th=[12125], 99.90th=[13435], 99.95th=[14353], 00:09:25.003 | 99.99th=[15270] 00:09:25.003 bw ( KiB/s): min=15536, max=40664, per=53.87%, avg=26438.55, stdev=8040.60, samples=11 00:09:25.003 iops : min= 3884, max=10166, avg=6609.64, stdev=2010.15, samples=11 00:09:25.003 write: IOPS=7200, BW=28.1MiB/s (29.5MB/s)(147MiB/5223msec); 0 zone resets 00:09:25.003 slat (usec): min=12, max=3328, avg=51.08, stdev=137.94 00:09:25.003 clat (usec): min=502, max=15928, avg=5990.26, stdev=1621.80 00:09:25.003 lat (usec): min=549, max=15954, avg=6041.33, stdev=1636.89 00:09:25.003 clat percentiles (usec): 00:09:25.003 | 1.00th=[ 2540], 5.00th=[ 3163], 10.00th=[ 3621], 20.00th=[ 4293], 00:09:25.003 | 30.00th=[ 5014], 40.00th=[ 5932], 50.00th=[ 6456], 60.00th=[ 6783], 00:09:25.003 | 70.00th=[ 7046], 80.00th=[ 7308], 90.00th=[ 7635], 95.00th=[ 7963], 00:09:25.003 | 99.00th=[ 9634], 99.50th=[10421], 99.90th=[11994], 99.95th=[12387], 00:09:25.003 | 99.99th=[12911] 00:09:25.003 bw ( KiB/s): min=16384, 
max=40088, per=91.67%, avg=26402.64, stdev=7815.20, samples=11 00:09:25.003 iops : min= 4096, max=10022, avg=6600.64, stdev=1953.83, samples=11 00:09:25.003 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.04% 00:09:25.003 lat (msec) : 2=0.19%, 4=7.49%, 10=89.40%, 20=2.85% 00:09:25.003 cpu : usr=5.68%, sys=23.33%, ctx=7161, majf=0, minf=102 00:09:25.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:25.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.003 issued rwts: total=73654,37607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.003 00:09:25.003 Run status group 0 (all jobs): 00:09:25.003 READ: bw=47.9MiB/s (50.3MB/s), 47.9MiB/s-47.9MiB/s (50.3MB/s-50.3MB/s), io=288MiB (302MB), run=6003-6003msec 00:09:25.003 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=147MiB (154MB), run=5223-5223msec 00:09:25.003 00:09:25.003 Disk stats (read/write): 00:09:25.003 nvme0n1: ios=72044/37607, merge=0/0, ticks=492547/209213, in_queue=701760, util=98.63% 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@128 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:25.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.003 09:03:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@133 -- # rm -f ./local-job0-0-verify.state 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # rm -f ./local-job1-1-verify.state 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@136 -- # trap - SIGINT SIGTERM EXIT 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@138 -- # nvmftestfini 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:25.003 rmmod nvme_tcp 00:09:25.003 rmmod nvme_fabrics 00:09:25.003 rmmod nvme_keyring 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:25.003 09:03:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n 70100 ']' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 70100 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 70100 ']' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 70100 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70100 00:09:25.003 killing process with pid 70100 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70100' 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 70100 00:09:25.003 09:03:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 70100 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:09:25.263 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:09:25.263 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:09:25.523 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:25.523 ************************************ 00:09:25.523 END TEST nvmf_target_multipath 00:09:25.523 ************************************ 00:09:25.523 00:09:25.523 real 0m20.535s 00:09:25.523 user 1m19.903s 00:09:25.523 sys 0m6.241s 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 ************************************ 00:09:25.523 START TEST nvmf_zcopy 00:09:25.523 
************************************ 00:09:25.523 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.783 * Looking for test storage... 00:09:25.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.783 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.784 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.784 --rc genhtml_branch_coverage=1 00:09:25.784 --rc genhtml_function_coverage=1 00:09:25.784 --rc genhtml_legend=1 00:09:25.784 --rc geninfo_all_blocks=1 00:09:25.784 --rc geninfo_unexecuted_blocks=1 00:09:25.784 00:09:25.784 ' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.784 --rc genhtml_branch_coverage=1 00:09:25.784 --rc genhtml_function_coverage=1 00:09:25.784 --rc genhtml_legend=1 00:09:25.784 --rc geninfo_all_blocks=1 00:09:25.784 --rc geninfo_unexecuted_blocks=1 00:09:25.784 00:09:25.784 ' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.784 --rc genhtml_branch_coverage=1 00:09:25.784 --rc genhtml_function_coverage=1 00:09:25.784 --rc genhtml_legend=1 00:09:25.784 --rc geninfo_all_blocks=1 00:09:25.784 --rc geninfo_unexecuted_blocks=1 00:09:25.784 00:09:25.784 ' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.784 --rc genhtml_branch_coverage=1 00:09:25.784 --rc genhtml_function_coverage=1 00:09:25.784 --rc genhtml_legend=1 00:09:25.784 --rc geninfo_all_blocks=1 00:09:25.784 --rc geninfo_unexecuted_blocks=1 00:09:25.784 00:09:25.784 ' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:09:25.784 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:25.784 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:09:25.784 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 
00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g 
_dev 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:25.785 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:09:25.785 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:09:25.785 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:26.045 10.0.0.1 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:09:26.045 10.0.0.2 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@66 -- # set_up initiator0 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:09:26.045 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:09:26.046 
09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' 
ip link set initiator1 up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:09:26.046 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:09:26.046 10.0.0.3 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:09:26.046 10.0.0.4 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 
00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:26.046 
09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:09:26.046 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:26.047 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:26.047 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.047 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:09:26.047 00:09:26.047 --- 10.0.0.1 ping statistics --- 00:09:26.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.047 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:09:26.047 09:03:04 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:26.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:09:26.047 00:09:26.047 --- 10.0.0.2 ping statistics --- 00:09:26.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.047 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:26.047 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:26.306 
09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.306 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:09:26.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:26.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:09:26.307 00:09:26.307 --- 10.0.0.3 ping statistics --- 00:09:26.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.307 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:09:26.307 
09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:09:26.307 09:03:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:09:26.307 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:09:26.307 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms 00:09:26.307 00:09:26.307 --- 10.0.0.4 ping statistics --- 00:09:26.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.307 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
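The `ping_ip` helper traced above (nvmf/setup.sh@80-83) takes an optional *name* of an array variable holding a namespace command prefix (`NVMF_TARGET_NS_CMD`, i.e. `ip netns exec nvmf_ns_spdk`) and builds the ping command with `eval`. A sketch reconstructed from the trace; the stubbed `ping` and the empty `EMPTY_NS` array are assumptions added so the sketch runs without network access:

```shell
#!/usr/bin/env bash
# Reconstructed from the trace: ping an address once, optionally inside a
# network namespace whose command prefix lives in a named array (nameref).
ping_ip() {
  local ip=$1 in_ns=${2:-} count=${3:-1}
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns               # e.g. NVMF_TARGET_NS_CMD in the trace
    eval "${ns[*]} ping -c $count $ip"
  else
    ping -c "$count" "$ip"
  fi
}

# Demo: stub out ping so the sketch is runnable anywhere (assumption).
ping() { echo "would ping $*"; }
EMPTY_NS=()                           # stands in for NVMF_TARGET_NS_CMD
ping_ip 10.0.0.1 EMPTY_NS
ping_ip 10.0.0.2
```

In the real run the first form becomes `ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1`, exactly as the trace shows.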
00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:09:26.307 09:03:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:09:26.307 09:03:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:26.307 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.308 
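As the repeated `cat /sys/class/net/<dev>/ifalias` calls above show, setup.sh stores each interface's IP address in the interface's `ifalias` attribute and reads it back through sysfs (prefixed with `ip netns exec …` for target devices). A sketch of that lookup; the `SYSFS_ROOT` override is a hypothetical knob added here so the sketch can be exercised against a temp directory instead of a live interface:

```shell
#!/usr/bin/env bash
# Read an interface's IP from its ifalias attribute, as get_ip_address
# does in the trace. SYSFS_ROOT is an assumption for testability; the
# real script reads /sys/class/net directly.
get_ip_address() {
  local dev=$1 root=${SYSFS_ROOT:-/sys/class/net}
  local ip
  ip=$(cat "$root/$dev/ifalias") || return 1
  [[ -n $ip ]] && echo "$ip"
}

# Demo against a fake sysfs tree.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/initiator0"
echo 10.0.0.1 > "$SYSFS_ROOT/initiator0/ifalias"
get_ip_address initiator0   # prints 10.0.0.1, matching the trace
```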
09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=70735 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 70735 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 70735 ']' 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.308 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.308 [2024-11-20 09:03:05.159943] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:26.308 [2024-11-20 09:03:05.160027] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.567 [2024-11-20 09:03:05.306365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.567 [2024-11-20 09:03:05.364341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
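`waitforlisten` (common/autotest_common.sh), which prints the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above, polls until the freshly started `nvmf_tgt` (pid 70735 here) is reachable, with `max_retries=100` as seen in the trace. A simplified sketch of just the bounded polling loop, checking only that a path appears; the real helper additionally verifies the pid is still alive and that the RPC endpoint answers:

```shell
#!/usr/bin/env bash
# Simplified polling loop: wait for a path (e.g. an RPC UNIX socket) to
# appear, with a bounded number of retries. The real waitforlisten also
# checks the target process and its RPC server, not just the path.
wait_for_path() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

# Demo: an existing path succeeds at once; a missing one times out.
wait_for_path /tmp 1 && echo "found /tmp"
wait_for_path /nonexistent-rpc.sock 2 || echo "timed out"
```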
00:09:26.567 [2024-11-20 09:03:05.364387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.567 [2024-11-20 09:03:05.364397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.567 [2024-11-20 09:03:05.364405] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.567 [2024-11-20 09:03:05.364412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.567 [2024-11-20 09:03:05.364807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 [2024-11-20 09:03:05.539446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 [2024-11-20 09:03:05.555583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 malloc0 00:09:26.825 09:03:05 
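Condensed from this stretch of the xtrace, target/zcopy.sh brings the target up with a short `rpc_cmd` sequence: create a zero-copy TCP transport, create subsystem cnode1, add data and discovery listeners on 10.0.0.2:4420, create a 32 MB malloc bdev, and attach it as a namespace. The sketch below replays that sequence in dry-run form; the stubbed `rpc_cmd` (printing instead of talking to /var/tmp/spdk.sock via rpc.py) is an assumption for illustration, while the commands and arguments are taken verbatim from the trace:

```shell
#!/usr/bin/env bash
# Dry-run replay of the RPC bring-up sequence from the trace. rpc_cmd is
# stubbed here; in the real run it forwards to scripts/rpc.py against the
# target's UNIX-domain RPC socket.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```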
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:26.825 { 00:09:26.825 "params": { 00:09:26.825 "name": "Nvme$subsystem", 00:09:26.825 "trtype": "$TEST_TRANSPORT", 00:09:26.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.825 "adrfam": "ipv4", 00:09:26.825 "trsvcid": "$NVMF_PORT", 00:09:26.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.825 "hdgst": ${hdgst:-false}, 00:09:26.825 "ddgst": ${ddgst:-false} 00:09:26.825 }, 00:09:26.825 "method": "bdev_nvme_attach_controller" 00:09:26.825 } 00:09:26.825 EOF 00:09:26.825 )") 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@396 -- # jq . 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:09:26.825 09:03:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:26.825 "params": { 00:09:26.825 "name": "Nvme1", 00:09:26.825 "trtype": "tcp", 00:09:26.825 "traddr": "10.0.0.2", 00:09:26.825 "adrfam": "ipv4", 00:09:26.825 "trsvcid": "4420", 00:09:26.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:26.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:26.825 "hdgst": false, 00:09:26.825 "ddgst": false 00:09:26.825 }, 00:09:26.825 "method": "bdev_nvme_attach_controller" 00:09:26.825 }' 00:09:26.825 [2024-11-20 09:03:05.655417] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:26.825 [2024-11-20 09:03:05.655555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70777 ] 00:09:27.094 [2024-11-20 09:03:05.802632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.094 [2024-11-20 09:03:05.867128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.355 Running I/O for 10 seconds... 
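`gen_nvmf_target_json` (nvmf/common.sh@372-398), whose expansion is visible above, fills a heredoc template per subsystem and pipes the result through `jq`, producing the `bdev_nvme_attach_controller` config that bdevperf reads from `/dev/fd/62`. A self-contained sketch of the same idea for a single subsystem; it uses plain parameter expansion with no `jq` dependency (an assumption to keep the sketch standalone):

```shell
#!/usr/bin/env bash
# Emit one bdev_nvme_attach_controller config blob, shaped like the JSON
# printed in the trace above.
gen_target_json() {
  local subsystem=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json 1 10.0.0.2 4420
```

With subsystem 1 this reproduces the `Nvme1` / `cnode1` / `10.0.0.2:4420` blob printed by `printf '%s\n'` in the trace.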
00:09:29.252 5813.00 IOPS, 45.41 MiB/s
[2024-11-20T09:03:09.106Z] 5873.00 IOPS, 45.88 MiB/s
[2024-11-20T09:03:10.492Z] 5901.00 IOPS, 46.10 MiB/s
[2024-11-20T09:03:11.427Z] 6004.00 IOPS, 46.91 MiB/s
[2024-11-20T09:03:12.364Z] 6112.00 IOPS, 47.75 MiB/s
[2024-11-20T09:03:13.299Z] 6158.17 IOPS, 48.11 MiB/s
[2024-11-20T09:03:14.234Z] 6199.43 IOPS, 48.43 MiB/s
[2024-11-20T09:03:15.170Z] 6238.00 IOPS, 48.73 MiB/s
[2024-11-20T09:03:16.107Z] 6263.56 IOPS, 48.93 MiB/s
[2024-11-20T09:03:16.107Z] 6289.90 IOPS, 49.14 MiB/s
00:09:37.188 Latency(us)
[2024-11-20T09:03:16.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:37.188 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:37.188 Verification LBA range: start 0x0 length 0x1000
00:09:37.188 Nvme1n1 : 10.01 6293.22 49.17 0.00 0.00 20276.02 3232.12 32172.22
00:09:37.188 [2024-11-20T09:03:16.107Z] ===================================================================================================================
00:09:37.188 [2024-11-20T09:03:16.107Z] Total : 6293.22 49.17 0.00 0.00 20276.02 3232.12 32172.22
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=70890
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config
00:09:37.448 09:03:16
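The throughput columns in the results above follow directly from the I/O size: bdevperf ran with `-o 8192`, so MiB/s = IOPS × 8192 / 1048576, i.e. IOPS / 128. For example, the final 6293.22 IOPS corresponds to 6293.22 / 128 ≈ 49.17 MiB/s, matching the table. The conversion, checked with awk:

```shell
#!/usr/bin/env bash
# Convert IOPS at a fixed I/O size to MiB/s: (IOPS * bytes per I/O)
# divided by bytes per MiB. With -o 8192 this reduces to IOPS / 128.
iops_to_mibs() {
  local iops=$1 io_size=$2
  awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f\n", i * s / 1048576 }'
}

iops_to_mibs 6293.22 8192   # prints 49.17
iops_to_mibs 5813.00 8192   # prints 45.41
```

Both values reproduce the MiB/s column reported by bdevperf above.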
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:37.448 { 00:09:37.448 "params": { 00:09:37.448 "name": "Nvme$subsystem", 00:09:37.448 "trtype": "$TEST_TRANSPORT", 00:09:37.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.448 "adrfam": "ipv4", 00:09:37.448 "trsvcid": "$NVMF_PORT", 00:09:37.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.448 "hdgst": ${hdgst:-false}, 00:09:37.448 "ddgst": ${ddgst:-false} 00:09:37.448 }, 00:09:37.448 "method": "bdev_nvme_attach_controller" 00:09:37.448 } 00:09:37.448 EOF 00:09:37.448 )") 00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:09:37.448 [2024-11-20 09:03:16.278949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.278991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:09:37.448 09:03:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:37.448 "params": { 00:09:37.448 "name": "Nvme1", 00:09:37.448 "trtype": "tcp", 00:09:37.448 "traddr": "10.0.0.2", 00:09:37.448 "adrfam": "ipv4", 00:09:37.448 "trsvcid": "4420", 00:09:37.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.448 "hdgst": false, 00:09:37.448 "ddgst": false 00:09:37.448 }, 00:09:37.448 "method": "bdev_nvme_attach_controller" 00:09:37.448 }' 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.448 [2024-11-20 09:03:16.290903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.290934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.448 [2024-11-20 09:03:16.302923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.302950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:37.448 [2024-11-20 09:03:16.314903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.314929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.448 [2024-11-20 09:03:16.326919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.326946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 [2024-11-20 09:03:16.329697] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:09:37.448 [2024-11-20 09:03:16.329823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70890 ] 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.448 [2024-11-20 09:03:16.338910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.338936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.448 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.448 [2024-11-20 09:03:16.350926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.448 [2024-11-20 09:03:16.350952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.449 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.449 [2024-11-20 09:03:16.362912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.449 [2024-11-20 09:03:16.362940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.374916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.374942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.386944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.386972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.398944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.398971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.410942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.410972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.422947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.422990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.434951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:37.708 [2024-11-20 09:03:16.434993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.446977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.447021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.708 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.708 [2024-11-20 09:03:16.458967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.708 [2024-11-20 09:03:16.458993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.470972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.470998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 [2024-11-20 09:03:16.473121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.482987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.483017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.495031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.495061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.507031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.507077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.519000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.519026] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.530998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.531041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 [2024-11-20 09:03:16.533889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.542995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.543020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.555011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.555043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.567030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.567092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.579030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.579082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.591026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.591076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.603020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.603064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.709 [2024-11-20 09:03:16.615035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.709 [2024-11-20 09:03:16.615100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.709 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.968 [2024-11-20 09:03:16.627027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.968 [2024-11-20 09:03:16.627072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.968 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.968 [2024-11-20 09:03:16.639022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.968 [2024-11-20 09:03:16.639064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.968 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.968 [2024-11-20 09:03:16.651095] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.968 [2024-11-20 09:03:16.651143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.663043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.663072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.675138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.675168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.687067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.687096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.699049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.699079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.711086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.711135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 Running I/O for 5 seconds... 
00:09:37.969 [2024-11-20 09:03:16.723062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.723091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.739779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.739856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.749998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.750037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.765158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.765206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.777086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.777133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.794245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.794325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.809624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.809671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.819634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 
09:03:16.819682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.833404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.833451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.850087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.850122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:37.969 [2024-11-20 09:03:16.865798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.969 [2024-11-20 09:03:16.865845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters
00:09:37.969 [2024-11-20 09:03:16.877357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:37.969 [2024-11-20 09:03:16.877389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:37.969 2024/11/20 09:03:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line record repeats with successive timestamps from 09:03:16.893 through 09:03:17.711 ...]
00:09:39.008 12255.00 IOPS, 95.74 MiB/s [2024-11-20T09:03:17.927Z]
[... the same three-line record continues repeating from 09:03:17.729 through 09:03:18.026 ...]
00:09:39.268 [2024-11-20 09:03:18.041559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.268 [2024-11-20 09:03:18.041606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid
parameters 00:09:39.268 [2024-11-20 09:03:18.056946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.056992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.068337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.068384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.085039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.085072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.101262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.101309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.119264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.119312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.135468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.135515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.152553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 09:03:18.152602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.268 [2024-11-20 09:03:18.169224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.268 [2024-11-20 
09:03:18.169290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.268 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.527 [2024-11-20 09:03:18.185929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-20 09:03:18.185964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.527 [2024-11-20 09:03:18.202204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-20 09:03:18.202268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.527 [2024-11-20 09:03:18.213561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-20 09:03:18.213607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:39.527 [2024-11-20 09:03:18.230092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-20 09:03:18.230126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.527 [2024-11-20 09:03:18.247300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.527 [2024-11-20 09:03:18.247349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.527 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.263215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.263263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.280425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.280476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.295285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.295332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.312980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.313028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.327794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.327854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.343733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 
09:03:18.343798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.358936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.358967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.374736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.374795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.390874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.390921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:39.528 [2024-11-20 09:03:18.408227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.408274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.424045] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.424109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.528 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.528 [2024-11-20 09:03:18.440591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.528 [2024-11-20 09:03:18.440654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.455994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.456041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.471833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.471879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.489085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.489134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.503711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.503787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.520536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 
09:03:18.520585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.535371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.535419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.551193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.551239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.568312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.568359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:39.787 [2024-11-20 09:03:18.585314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.585362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.601712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.601786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.618584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.618616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.635510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.635559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.650879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.650925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.667059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.667123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.683675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 09:03:18.683721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:39.787 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:39.787 [2024-11-20 09:03:18.701092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:39.787 [2024-11-20 
09:03:18.701140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.046 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.046 [2024-11-20 09:03:18.716047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.046 [2024-11-20 09:03:18.716094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.046 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.046 12314.00 IOPS, 96.20 MiB/s [2024-11-20T09:03:18.965Z] [2024-11-20 09:03:18.732753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.046 [2024-11-20 09:03:18.732810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.046 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.046 [2024-11-20 09:03:18.749866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.046 [2024-11-20 09:03:18.749936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.046 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.046 [2024-11-20 09:03:18.767043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.046 [2024-11-20 09:03:18.767089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.046 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.046 [2024-11-20 09:03:18.783681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.046 [2024-11-20 09:03:18.783729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.047 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.047 [2024-11-20 09:03:18.799379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.047 [2024-11-20 09:03:18.799426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.047 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:40.047 [2024-11-20 09:03:18.815844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.047 [2024-11-20 09:03:18.815890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.047 2024/11/20 09:03:18 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:40.047 [2024-11-20 09:03:18.831899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:40.047 [2024-11-20 09:03:18.831946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:40.047 2024/11/20 09:03:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same three-line error sequence repeats with fresh timestamps from 09:03:18.842013 through 09:03:19.991836, identical in every field]
00:09:40.825 12341.00 IOPS, 96.41 MiB/s [2024-11-20T09:03:19.744Z]
00:09:41.085 [2024-11-20 09:03:19.991778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:41.085 [2024-11-20 09:03:19.991836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:41.085 2024/11/20 09:03:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns,
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.006982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.007015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.023760] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.023820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.040334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.040381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.056126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 
09:03:20.056174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.072478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.072526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.083812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.083858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.100206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.100254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:41.344 [2024-11-20 09:03:20.115068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.115101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.131891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.131939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.148268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.148316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.164051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.164082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.174026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.174060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.188237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.188283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.205791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.205837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.220497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 
09:03:20.220543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.235848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.235894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.344 [2024-11-20 09:03:20.247087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.344 [2024-11-20 09:03:20.247119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.344 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.264158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.264207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:41.603 [2024-11-20 09:03:20.278837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.278896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.295020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.295067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.310438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.310486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.327922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.327969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.342439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.342501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.359232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.359281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.375010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.375057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.390806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 
09:03:20.390865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.603 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.603 [2024-11-20 09:03:20.407259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.603 [2024-11-20 09:03:20.407309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.604 [2024-11-20 09:03:20.423972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.424037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.604 [2024-11-20 09:03:20.439886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.439933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:41.604 [2024-11-20 09:03:20.456930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.456995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.604 [2024-11-20 09:03:20.473433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.473481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.604 [2024-11-20 09:03:20.490526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.490572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.604 [2024-11-20 09:03:20.505504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.604 [2024-11-20 09:03:20.505551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.604 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.521815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.521862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.538628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.538675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.555461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.555492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.572788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 
09:03:20.572865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.588723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.588769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.604743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.604789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.622673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.622708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:41.863 [2024-11-20 09:03:20.639277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.639315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.655548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.655626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.671204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.671253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.687090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.687138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.703501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.703548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.720451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.720500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 12311.50 IOPS, 96.18 MiB/s [2024-11-20T09:03:20.782Z] 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.735284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.735331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.750282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:09:41.863 [2024-11-20 09:03:20.750330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:41.863 [2024-11-20 09:03:20.766174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.863 [2024-11-20 09:03:20.766208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.863 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.783389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.783425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.798468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.798517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.815018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.815050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.831242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.831289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.848001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.848033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.865394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.865429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.881363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.881412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.898073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.898106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.915390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.915440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.930796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:42.123 [2024-11-20 09:03:20.930853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.942063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.942099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.958928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.958973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.975961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.975993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:20.992723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:20.992795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:21.010495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:21.010542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:21.021416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:21.021478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.123 [2024-11-20 09:03:21.036213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.123 [2024-11-20 09:03:21.036261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.123 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.045942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.045976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.060594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.060639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.076042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.076087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.093217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 
09:03:21.093263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.109608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.109655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.126281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.126328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.143716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.143787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:42.382 [2024-11-20 09:03:21.158472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.158517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.382 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.382 [2024-11-20 09:03:21.175569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.382 [2024-11-20 09:03:21.175616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.190562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.190609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.206347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.206394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.224027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.224073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.239472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.239520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.250516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.250578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.267049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 
09:03:21.267081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.383 [2024-11-20 09:03:21.284169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.383 [2024-11-20 09:03:21.284216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.383 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.299991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.300025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.309877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.309950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:42.667 [2024-11-20 09:03:21.324426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.324472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.342039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.342072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.356813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.356859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.372633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.372681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.388303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.388350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.402592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.402639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.418880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.418927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.435144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 
09:03:21.435191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.451386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.451434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.667 [2024-11-20 09:03:21.468750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.667 [2024-11-20 09:03:21.468807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.667 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.484062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.484094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:42.668 [2024-11-20 09:03:21.494619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.494671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.509106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.509141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.525933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.525968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.541533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.541582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.558744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.558807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.668 [2024-11-20 09:03:21.573734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.668 [2024-11-20 09:03:21.573797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.668 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.589218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.589251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.599931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 
09:03:21.599965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.614470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.614516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.624840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.624869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.639639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.639685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:42.936 [2024-11-20 09:03:21.651019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.651051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.667521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.667569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.682553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.682598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.698954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.698985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:42.936 [2024-11-20 09:03:21.714300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:42.936 [2024-11-20 09:03:21.714348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:42.936 12331.40 IOPS, 96.34 MiB/s [2024-11-20T09:03:21.855Z]
00:09:42.936 [2024-11-20 09:03:21.724169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:42.936 [2024-11-20 09:03:21.724216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:09:42.936
00:09:42.936 Latency(us)
00:09:42.936 [2024-11-20T09:03:21.855Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:42.936 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:42.936 Nvme1n1 : 5.01 12334.53 96.36 0.00 0.00 10365.11 4557.73 17992.61
00:09:42.936 [2024-11-20T09:03:21.855Z] ===================================================================================================================
00:09:42.936 [2024-11-20T09:03:21.855Z] Total : 12334.53 96.36 0.00 0.00 10365.11 4557.73 17992.61
00:09:42.936 [2024-11-20 09:03:21.734951] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.734998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.746948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.746993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.758966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.758998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.770951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.771017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.782970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.783066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.794965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.795019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.806982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.807020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.936 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.936 [2024-11-20 09:03:21.819029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.936 [2024-11-20 09:03:21.819111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:09:42.937 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.937 [2024-11-20 09:03:21.830982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.937 [2024-11-20 09:03:21.831035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.937 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:42.937 [2024-11-20 09:03:21.842998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.937 [2024-11-20 09:03:21.843049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.937 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.854984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.855035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.866998] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.867049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.878968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.878999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.890962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.890991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.903004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.903041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.915040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.915091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.926986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.927013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 [2024-11-20 09:03:21.938974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.196 [2024-11-20 09:03:21.939003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.196 2024/11/20 09:03:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:43.196 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (70890) - No such process 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 70890 00:09:43.196 09:03:21 
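The burst of failures above is the zcopy test repeatedly issuing the `nvmf_subsystem_add_ns` JSON-RPC method with a NSID (1) that is already in use, each call rejected with code −32602. As a rough sketch of the request implied by those log lines (the method and parameter names are taken verbatim from the log; the exact wire framing used by SPDK's `rpc.py` client is an assumption here, modeled on plain JSON-RPC 2.0):

```python
import json

# Sketch of the JSON-RPC 2.0 request implied by the log above.
# Method and parameter names come from the logged call; the envelope
# fields ("jsonrpc", "id") are standard JSON-RPC 2.0, assumed here.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}
payload = json.dumps(request)

# The response seen in the log: -32602 is the standard JSON-RPC
# "Invalid params" code, returned here because NSID 1 is taken.
error = {"code": -32602, "message": "Invalid parameters"}
```

The test loops this call while the target is being shut down, which is why the same rejection repeats until the `kill`/`wait` on pid 70890 completes.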
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.196 delay0 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.196 09:03:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:43.455 [2024-11-20 09:03:22.150179] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:50.031 Initializing NVMe Controllers 00:09:50.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:09:50.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:50.031 Initialization complete. Launching workers. 00:09:50.031 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 67 00:09:50.031 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 354, failed to submit 33 00:09:50.031 success 153, unsuccessful 201, failed 0 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:50.031 rmmod nvme_tcp 00:09:50.031 rmmod nvme_fabrics 00:09:50.031 rmmod nvme_keyring 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 70735 ']' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 70735 ']' 00:09:50.031 
09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:50.031 killing process with pid 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70735' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 70735 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:50.031 09:03:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:09:50.031 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:09:50.032 09:03:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:09:50.032 00:09:50.032 real 0m24.345s 00:09:50.032 user 0m39.750s 00:09:50.032 sys 0m6.517s 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.032 
09:03:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 ************************************ 00:09:50.032 END TEST nvmf_zcopy 00:09:50.032 ************************************ 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # trap - SIGINT SIGTERM EXIT 00:09:50.032 00:09:50.032 real 3m42.370s 00:09:50.032 user 11m53.528s 00:09:50.032 sys 1m4.247s 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.032 ************************************ 00:09:50.032 END TEST nvmf_target_core 00:09:50.032 ************************************ 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 09:03:28 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.032 09:03:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.032 09:03:28 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.032 09:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:50.032 ************************************ 00:09:50.032 START TEST nvmf_target_extra 00:09:50.032 ************************************ 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:50.032 * Looking for test storage... 
00:09:50.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.032 09:03:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:50.292 09:03:28 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.292 --rc genhtml_branch_coverage=1 00:09:50.292 --rc genhtml_function_coverage=1 00:09:50.292 --rc genhtml_legend=1 00:09:50.292 --rc geninfo_all_blocks=1 00:09:50.292 --rc geninfo_unexecuted_blocks=1 00:09:50.292 00:09:50.292 ' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.292 --rc 
genhtml_branch_coverage=1 00:09:50.292 --rc genhtml_function_coverage=1 00:09:50.292 --rc genhtml_legend=1 00:09:50.292 --rc geninfo_all_blocks=1 00:09:50.292 --rc geninfo_unexecuted_blocks=1 00:09:50.292 00:09:50.292 ' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.292 --rc genhtml_branch_coverage=1 00:09:50.292 --rc genhtml_function_coverage=1 00:09:50.292 --rc genhtml_legend=1 00:09:50.292 --rc geninfo_all_blocks=1 00:09:50.292 --rc geninfo_unexecuted_blocks=1 00:09:50.292 00:09:50.292 ' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.292 --rc genhtml_branch_coverage=1 00:09:50.292 --rc genhtml_function_coverage=1 00:09:50.292 --rc genhtml_legend=1 00:09:50.292 --rc geninfo_all_blocks=1 00:09:50.292 --rc geninfo_unexecuted_blocks=1 00:09:50.292 00:09:50.292 ' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.292 09:03:29 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.292 09:03:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:50.293 09:03:29 
nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:50.293 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:50.293 ************************************ 00:09:50.293 
START TEST nvmf_example 00:09:50.293 ************************************ 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:50.293 * Looking for test storage... 00:09:50.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.293 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
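The `cmp_versions`/`lt` trace above compares `lcov`'s version against 2 by splitting both version strings on `.`, `-` and `:` (`IFS=.-:`) and comparing component-wise. A minimal, self-contained re-implementation of that idea (the function name `lt` matches the trace; the body is a simplified sketch, not the actual scripts/common.sh code):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -- return 0 (true) if VER1 < VER2, component-wise.
lt() {
    local IFS=.-:            # split on the same separators the trace shows
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        # missing components count as 0, so "2" compares like "2.0"
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                 # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' call in the trace
```

This is why the run above takes the `return 0` branch and enables the extended `--rc lcov_branch_coverage=1 ...` options.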
00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.551 --rc genhtml_branch_coverage=1 00:09:50.551 --rc genhtml_function_coverage=1 00:09:50.551 --rc genhtml_legend=1 00:09:50.551 --rc geninfo_all_blocks=1 00:09:50.551 --rc geninfo_unexecuted_blocks=1 00:09:50.551 00:09:50.551 ' 00:09:50.551 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.552 --rc genhtml_branch_coverage=1 00:09:50.552 --rc genhtml_function_coverage=1 00:09:50.552 --rc genhtml_legend=1 00:09:50.552 --rc geninfo_all_blocks=1 00:09:50.552 --rc geninfo_unexecuted_blocks=1 00:09:50.552 00:09:50.552 ' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.552 --rc genhtml_branch_coverage=1 00:09:50.552 --rc genhtml_function_coverage=1 00:09:50.552 --rc genhtml_legend=1 00:09:50.552 --rc geninfo_all_blocks=1 00:09:50.552 --rc geninfo_unexecuted_blocks=1 00:09:50.552 00:09:50.552 ' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.552 --rc genhtml_branch_coverage=1 00:09:50.552 --rc genhtml_function_coverage=1 00:09:50.552 --rc genhtml_legend=1 00:09:50.552 --rc geninfo_all_blocks=1 00:09:50.552 --rc geninfo_unexecuted_blocks=1 00:09:50.552 00:09:50.552 ' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:50.552 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 00:09:50.552 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:50.552 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:50.552 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@280 -- # 
nvmf_veth_init 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@223 -- # create_target_ns 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # create_main_bridge 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@105 -- # delete_main_bridge 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # 
ips=() 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up initiator0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 
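The `create_veth`/`set_up` steps traced above build, per interface pair, a veth device whose `*_br` peer end is later enslaved to the `nvmf_br` bridge. A dry-run sketch of that wiring (helper names mirror the trace but the bodies are simplified assumptions; `RUN=echo` prints the commands instead of executing them, since the real ones need root):

```shell
#!/usr/bin/env bash
RUN=${RUN:-echo}   # set RUN= (empty) and run as root to actually create devices

create_veth() {    # create_veth <dev> <peer>
    $RUN ip link add "$1" type veth peer name "$2"
    $RUN ip link set "$1" up
    $RUN ip link set "$2" up
}

add_to_bridge() {  # add_to_bridge <dev> <bridge>
    $RUN ip link set "$1" master "$2"
    $RUN ip link set "$1" up
}

# the pair-0 sequence from the log:
create_veth initiator0 initiator0_br
add_to_bridge initiator0_br nvmf_br
```

In the log the same pattern then repeats for `target0`, `initiator1` and `target1`.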
00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up target0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0 up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up target0_br 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns target0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:09:50.552 10.0.0.1 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:50.552 10.0.0.2 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up initiator0 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 
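The `val_to_ip` calls traced above show that setup.sh passes IPv4 addresses around as 32-bit integers (167772161 is 0x0A000001, i.e. 10.0.0.1) and formats them back to dotted-quad with `printf`. A small stand-alone version of that conversion (a sketch consistent with the `printf '%u.%u.%u.%u\n' 10 0 0 1` output in the trace, not the literal setup.sh body):

```shell
#!/usr/bin/env bash
# val_to_ip VAL -- print the 32-bit integer VAL as a dotted-quad IPv4 address.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1, as assigned to initiator0 above
val_to_ip 167772162   # 10.0.0.2, as assigned to target0
```

Keeping the pool as an integer lets the loop in `setup_interfaces` hand out consecutive addresses with plain arithmetic (`ips=("$ip" $((++ip)))`).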
00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.552 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up target0_br 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- 
# setup_interface_pair 1 veth 167772163 tcp 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:09:50.553 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up initiator1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up 
initiator1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@151 -- # set_up target1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1 up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # set_up target1_br 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:50.813 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns target1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772163 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:09:50.813 10.0.0.3 
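The `val_to_ip` trace above shows how the harness turns the integer IP pool value (167772163) into dotted-quad notation (10.0.0.3) before handing it to `ip addr add`. A minimal standalone sketch of that conversion, reconstructed from the trace rather than copied from the SPDK source:

```shell
# Convert a 32-bit integer to dotted-quad IPv4, mirroring the
# val_to_ip trace above (167772163 == 0x0A000003 -> 10.0.0.3).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772163   # 10.0.0.3
val_to_ip 167772164   # 10.0.0.4
```

This is why the trace shows `ip_pool += 2` per pair: each `setup_interface_pair` consumes two consecutive integers, one for the initiator and one for the target.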
00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772164 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:09:50.813 10.0.0.4 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up initiator1 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:09:50.813 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:09:50.814 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@129 -- # set_up target1_br 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:50.814 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 2 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:50.814 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:50.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:09:50.814 00:09:50.814 --- 10.0.0.1 ping statistics --- 00:09:50.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.814 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target0 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:50.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:09:50.814 00:09:50.814 --- 10.0.0.2 ping statistics --- 00:09:50.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.814 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:50.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:09:50.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:09:50.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:09:50.815 00:09:50.815 --- 10.0.0.3 ping statistics --- 00:09:50.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.815 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:09:50.815 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:50.815 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:09:50.815 00:09:50.815 --- 10.0.0.4 ping statistics --- 00:09:50.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.815 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # return 0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # 
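With both ping loops complete, the harness has verified reachability for two interface pairs: initiator devices in the default namespace, target devices inside `nvmf_ns_spdk`, all joined by the `nvmf_br` bridge. The resulting address plan, summarized as a bash associative array (names follow the `dev_map` entries in the trace; the array itself is an illustrative reconstruction):

```shell
# Address plan after setup_interface_pair has run for ids 0 and 1.
# initiator* live in the default netns; target* live in nvmf_ns_spdk.
declare -A addr_map=(
  [initiator0]=10.0.0.1
  [target0]=10.0.0.2
  [initiator1]=10.0.0.3
  [target1]=10.0.0.4
)

echo "${addr_map[target0]}"   # 10.0.0.2
```

Each initiator/target pair shares a /24, so pings cross the bridge in both directions, as the four ICMP exchanges in the trace confirm.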
get_initiator_ip_address 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=initiator1 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 
00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target0 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:50.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:51.075 
09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target1 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo target1 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=target1 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=71329 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 71329 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 71329 ']' 00:09:51.075 09:03:29 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.075 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.011 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.011 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:52.011 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:52.011 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.011 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:09:52.270 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:04.482 Initializing NVMe Controllers 00:10:04.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:04.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:04.482 Initialization complete. Launching workers. 00:10:04.482 ======================================================== 00:10:04.482 Latency(us) 00:10:04.482 Device Information : IOPS MiB/s Average min max 00:10:04.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14907.99 58.23 4292.71 727.94 25099.91 00:10:04.482 ======================================================== 00:10:04.482 Total : 14907.99 58.23 4292.71 727.94 25099.91 00:10:04.482 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:04.482 09:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:04.482 rmmod nvme_tcp 00:10:04.482 rmmod nvme_fabrics 00:10:04.482 rmmod nvme_keyring 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # return 0 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 71329 ']' 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 71329 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 71329 ']' 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 71329 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71329 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71329' 00:10:04.482 killing process with pid 71329 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 71329 00:10:04.482 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 71329 00:10:04.483 nvmf threads initialize successfully 
00:10:04.483 bdev subsystem init successfully 00:10:04.483 created a nvmf target service 00:10:04.483 create targets's poll groups done 00:10:04.483 all subsystems of target started 00:10:04.483 nvmf target is running 00:10:04.483 all subsystems of target stopped 00:10:04.483 destroy targets's poll groups done 00:10:04.483 destroyed the nvmf target service 00:10:04.483 bdev subsystem finish successfully 00:10:04.483 nvmf threads destroy successfully 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:04.483 
09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # continue 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # continue 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.483 ************************************ 00:10:04.483 END TEST nvmf_example 00:10:04.483 ************************************ 00:10:04.483 00:10:04.483 real 0m12.821s 00:10:04.483 user 0m45.093s 00:10:04.483 sys 0m2.202s 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.483 09:03:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.483 ************************************ 00:10:04.483 START TEST nvmf_filesystem 00:10:04.483 ************************************ 00:10:04.483 09:03:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:04.483 * Looking for test storage... 00:10:04.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.483 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.483 --rc genhtml_branch_coverage=1 00:10:04.483 --rc genhtml_function_coverage=1 00:10:04.483 --rc genhtml_legend=1 00:10:04.483 --rc geninfo_all_blocks=1 00:10:04.483 --rc 
geninfo_unexecuted_blocks=1 00:10:04.483 00:10:04.483 ' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.483 --rc genhtml_branch_coverage=1 00:10:04.483 --rc genhtml_function_coverage=1 00:10:04.483 --rc genhtml_legend=1 00:10:04.483 --rc geninfo_all_blocks=1 00:10:04.483 --rc geninfo_unexecuted_blocks=1 00:10:04.483 00:10:04.483 ' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.483 --rc genhtml_branch_coverage=1 00:10:04.483 --rc genhtml_function_coverage=1 00:10:04.483 --rc genhtml_legend=1 00:10:04.483 --rc geninfo_all_blocks=1 00:10:04.483 --rc geninfo_unexecuted_blocks=1 00:10:04.483 00:10:04.483 ' 00:10:04.483 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.484 --rc genhtml_branch_coverage=1 00:10:04.484 --rc genhtml_function_coverage=1 00:10:04.484 --rc genhtml_legend=1 00:10:04.484 --rc geninfo_all_blocks=1 00:10:04.484 --rc geninfo_unexecuted_blocks=1 00:10:04.484 00:10:04.484 ' 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:04.484 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:04.484 
09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:04.484 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 
00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:04.484 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:04.484 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # 
_root=/home/vagrant/spdk_repo/spdk/test/common 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:04.485 #define SPDK_CONFIG_H 00:10:04.485 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:04.485 #define SPDK_CONFIG_APPS 1 00:10:04.485 #define SPDK_CONFIG_ARCH native 00:10:04.485 #undef SPDK_CONFIG_ASAN 00:10:04.485 #define SPDK_CONFIG_AVAHI 1 00:10:04.485 #undef 
SPDK_CONFIG_CET 00:10:04.485 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:04.485 #define SPDK_CONFIG_COVERAGE 1 00:10:04.485 #define SPDK_CONFIG_CROSS_PREFIX 00:10:04.485 #undef SPDK_CONFIG_CRYPTO 00:10:04.485 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:04.485 #undef SPDK_CONFIG_CUSTOMOCF 00:10:04.485 #undef SPDK_CONFIG_DAOS 00:10:04.485 #define SPDK_CONFIG_DAOS_DIR 00:10:04.485 #define SPDK_CONFIG_DEBUG 1 00:10:04.485 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:04.485 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:04.485 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:04.485 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:04.485 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:04.485 #undef SPDK_CONFIG_DPDK_UADK 00:10:04.485 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:04.485 #define SPDK_CONFIG_EXAMPLES 1 00:10:04.485 #undef SPDK_CONFIG_FC 00:10:04.485 #define SPDK_CONFIG_FC_PATH 00:10:04.485 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:04.485 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:04.485 #define SPDK_CONFIG_FSDEV 1 00:10:04.485 #undef SPDK_CONFIG_FUSE 00:10:04.485 #undef SPDK_CONFIG_FUZZER 00:10:04.485 #define SPDK_CONFIG_FUZZER_LIB 00:10:04.485 #define SPDK_CONFIG_GOLANG 1 00:10:04.485 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:04.485 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:04.485 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:04.485 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:04.485 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:04.485 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:04.485 #undef SPDK_CONFIG_HAVE_LZ4 00:10:04.485 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:04.485 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:04.485 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:04.485 #define SPDK_CONFIG_IDXD 1 00:10:04.485 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:04.485 #undef SPDK_CONFIG_IPSEC_MB 00:10:04.485 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:04.485 #define SPDK_CONFIG_ISAL 1 00:10:04.485 #define SPDK_CONFIG_ISAL_CRYPTO 1 
00:10:04.485 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:04.485 #define SPDK_CONFIG_LIBDIR 00:10:04.485 #undef SPDK_CONFIG_LTO 00:10:04.485 #define SPDK_CONFIG_MAX_LCORES 128 00:10:04.485 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:04.485 #define SPDK_CONFIG_NVME_CUSE 1 00:10:04.485 #undef SPDK_CONFIG_OCF 00:10:04.485 #define SPDK_CONFIG_OCF_PATH 00:10:04.485 #define SPDK_CONFIG_OPENSSL_PATH 00:10:04.485 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:04.485 #define SPDK_CONFIG_PGO_DIR 00:10:04.485 #undef SPDK_CONFIG_PGO_USE 00:10:04.485 #define SPDK_CONFIG_PREFIX /usr/local 00:10:04.485 #undef SPDK_CONFIG_RAID5F 00:10:04.485 #undef SPDK_CONFIG_RBD 00:10:04.485 #define SPDK_CONFIG_RDMA 1 00:10:04.485 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:04.485 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:04.485 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:04.485 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:04.485 #define SPDK_CONFIG_SHARED 1 00:10:04.485 #undef SPDK_CONFIG_SMA 00:10:04.485 #define SPDK_CONFIG_TESTS 1 00:10:04.485 #undef SPDK_CONFIG_TSAN 00:10:04.485 #define SPDK_CONFIG_UBLK 1 00:10:04.485 #define SPDK_CONFIG_UBSAN 1 00:10:04.485 #undef SPDK_CONFIG_UNIT_TESTS 00:10:04.485 #undef SPDK_CONFIG_URING 00:10:04.485 #define SPDK_CONFIG_URING_PATH 00:10:04.485 #undef SPDK_CONFIG_URING_ZNS 00:10:04.485 #define SPDK_CONFIG_USDT 1 00:10:04.485 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:04.485 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:04.485 #undef SPDK_CONFIG_VFIO_USER 00:10:04.485 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:04.485 #define SPDK_CONFIG_VHOST 1 00:10:04.485 #define SPDK_CONFIG_VIRTIO 1 00:10:04.485 #undef SPDK_CONFIG_VTUNE 00:10:04.485 #define SPDK_CONFIG_VTUNE_DIR 00:10:04.485 #define SPDK_CONFIG_WERROR 1 00:10:04.485 #define SPDK_CONFIG_WPDK_DIR 00:10:04.485 #undef SPDK_CONFIG_XNVME 00:10:04.485 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:04.485 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:04.485 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@76 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:04.486 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:04.486 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:04.486 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:04.486 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:04.486 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
SPDK_TEST_NVME_INTERRUPT 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:04.487 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 
00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:04.487 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71586 ]] 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71586 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:04.488 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xbu0uu 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.xbu0uu/tests/target /tmp/spdk.xbu0uu 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:04.488 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979328512 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589868544 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- 
# avails["$mount"]=6259572736 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6856704 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979328512 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589868544 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 
00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266294272 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=135168 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:10:04.488 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora39-libvirt/output 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=94101159936 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5601619968 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source 
fs size use avail _ mount 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:04.488 * Looking for test storage... 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979328512 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:04.488 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # 
SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@344 -- # case "$op" in 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.489 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.489 --rc genhtml_branch_coverage=1 00:10:04.489 --rc genhtml_function_coverage=1 00:10:04.489 --rc genhtml_legend=1 00:10:04.489 --rc geninfo_all_blocks=1 00:10:04.489 --rc geninfo_unexecuted_blocks=1 00:10:04.489 00:10:04.489 ' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.489 --rc genhtml_branch_coverage=1 00:10:04.489 --rc genhtml_function_coverage=1 00:10:04.489 --rc genhtml_legend=1 00:10:04.489 --rc geninfo_all_blocks=1 00:10:04.489 --rc geninfo_unexecuted_blocks=1 00:10:04.489 00:10:04.489 ' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.489 --rc genhtml_branch_coverage=1 00:10:04.489 --rc genhtml_function_coverage=1 00:10:04.489 --rc genhtml_legend=1 00:10:04.489 --rc geninfo_all_blocks=1 00:10:04.489 --rc geninfo_unexecuted_blocks=1 00:10:04.489 00:10:04.489 ' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.489 --rc genhtml_branch_coverage=1 00:10:04.489 --rc genhtml_function_coverage=1 00:10:04.489 --rc genhtml_legend=1 00:10:04.489 --rc geninfo_all_blocks=1 00:10:04.489 --rc geninfo_unexecuted_blocks=1 00:10:04.489 00:10:04.489 ' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:04.489 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.489 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.489 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:04.490 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:04.490 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@223 -- # create_target_ns 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # ipts 
-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:04.490 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 
00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up target0 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:04.490 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 
00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:04.491 10.0.0.1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 
00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:04.491 10.0.0.2 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:04.491 
09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@151 -- # set_up target1 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.491 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:04.492 
09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772163 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:10:04.492 10.0.0.3 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772164 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:04.492 10.0.0.4 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 
-- # eval ' ip link set initiator1 up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link 
set initiator1_br up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:04.492 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:04.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:04.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:10:04.492 00:10:04.492 --- 10.0.0.1 ping statistics --- 00:10:04.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.492 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:04.492 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:04.493 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:04.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:04.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:10:04.493 00:10:04.493 --- 10.0.0.2 ping statistics --- 00:10:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.493 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:04.493 
09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:10:04.493 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:04.493 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:04.493 00:10:04.493 --- 10.0.0.3 ping statistics --- 00:10:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.493 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:04.493 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:10:04.493 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:04.493 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:10:04.493 00:10:04.493 --- 10.0.0.4 ping statistics --- 00:10:04.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:04.493 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # return 0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 
00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:04.493 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:04.493 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address 
target1 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo target1 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=target1 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 ************************************ 00:10:04.494 START TEST nvmf_filesystem_no_in_capsule 00:10:04.494 ************************************ 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:04.494 09:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=71787 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 71787 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71787 ']' 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:04.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:04.494 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 [2024-11-20 09:03:42.907898] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:10:04.494 [2024-11-20 09:03:42.908540] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:04.494 [2024-11-20 09:03:43.059188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:04.494 [2024-11-20 09:03:43.132143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:04.494 [2024-11-20 09:03:43.132212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:04.494 [2024-11-20 09:03:43.132226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:04.494 [2024-11-20 09:03:43.132237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:04.494 [2024-11-20 09:03:43.132246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:04.494 [2024-11-20 09:03:43.133623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.494 [2024-11-20 09:03:43.133804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:04.494 [2024-11-20 09:03:43.133957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:04.494 [2024-11-20 09:03:43.133959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.494 [2024-11-20 
09:03:43.319189] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.494 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.754 Malloc1 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.754 09:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.754 [2024-11-20 09:03:43.501443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.754 09:03:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:04.754 { 00:10:04.754 "aliases": [ 00:10:04.754 "3920dcdd-26b9-4a65-be66-f66eaa852762" 00:10:04.754 ], 00:10:04.754 "assigned_rate_limits": { 00:10:04.754 "r_mbytes_per_sec": 0, 00:10:04.754 "rw_ios_per_sec": 0, 00:10:04.754 "rw_mbytes_per_sec": 0, 00:10:04.754 "w_mbytes_per_sec": 0 00:10:04.754 }, 00:10:04.754 "block_size": 512, 00:10:04.754 "claim_type": "exclusive_write", 00:10:04.754 "claimed": true, 00:10:04.754 "driver_specific": {}, 00:10:04.754 "memory_domains": [ 00:10:04.754 { 00:10:04.754 "dma_device_id": "system", 00:10:04.754 "dma_device_type": 1 00:10:04.754 }, 00:10:04.754 { 00:10:04.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.754 "dma_device_type": 2 00:10:04.754 } 00:10:04.754 ], 00:10:04.754 "name": "Malloc1", 00:10:04.754 "num_blocks": 1048576, 00:10:04.754 "product_name": "Malloc disk", 00:10:04.754 "supported_io_types": { 00:10:04.754 "abort": true, 00:10:04.754 "compare": false, 00:10:04.754 "compare_and_write": false, 00:10:04.754 "copy": true, 00:10:04.754 "flush": true, 00:10:04.754 "get_zone_info": false, 00:10:04.754 "nvme_admin": false, 00:10:04.754 "nvme_io": false, 00:10:04.754 "nvme_io_md": false, 00:10:04.754 "nvme_iov_md": false, 00:10:04.754 "read": true, 00:10:04.754 "reset": true, 00:10:04.754 "seek_data": false, 00:10:04.754 "seek_hole": false, 00:10:04.754 "unmap": true, 00:10:04.754 "write": true, 00:10:04.754 "write_zeroes": true, 00:10:04.754 "zcopy": true, 00:10:04.754 "zone_append": false, 00:10:04.754 "zone_management": false 00:10:04.754 }, 00:10:04.754 "uuid": "3920dcdd-26b9-4a65-be66-f66eaa852762", 
00:10:04.754 "zoned": false 00:10:04.754 } 00:10:04.754 ]' 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:04.754 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:04.755 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:05.013 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.013 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.013 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.013 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.013 09:03:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:06.917 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:06.917 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:06.917 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:07.176 09:03:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:08.112 ************************************ 00:10:08.112 START TEST filesystem_ext4 00:10:08.112 
************************************ 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:08.112 09:03:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:08.112 mke2fs 1.47.0 (5-Feb-2023) 00:10:08.370 Discarding device blocks: 0/522240 done 00:10:08.370 
Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:08.370 Filesystem UUID: 6e9c6843-e6fb-4132-a80f-431ef541bd15 00:10:08.370 Superblock backups stored on blocks: 00:10:08.370 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:08.370 00:10:08.370 Allocating group tables: 0/64 done 00:10:08.370 Writing inode tables: 0/64 done 00:10:08.370 Creating journal (8192 blocks): done 00:10:08.370 Writing superblocks and filesystem accounting information: 0/64 done 00:10:08.370 00:10:08.370 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:08.370 09:03:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:13.640 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:13.640 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71787 00:10:13.899 09:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:13.899 ************************************ 00:10:13.899 END TEST filesystem_ext4 00:10:13.899 ************************************ 00:10:13.899 00:10:13.899 real 0m5.642s 00:10:13.899 user 0m0.023s 00:10:13.899 sys 0m0.067s 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.899 ************************************ 00:10:13.899 START TEST filesystem_btrfs 00:10:13.899 ************************************ 00:10:13.899 09:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:13.899 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:14.158 btrfs-progs v6.8.1 00:10:14.158 See https://btrfs.readthedocs.io for more information. 
00:10:14.158 00:10:14.158 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:14.158 NOTE: several default settings have changed in version 5.15, please make sure 00:10:14.158 this does not affect your deployments: 00:10:14.158 - DUP for metadata (-m dup) 00:10:14.158 - enabled no-holes (-O no-holes) 00:10:14.158 - enabled free-space-tree (-R free-space-tree) 00:10:14.158 00:10:14.158 Label: (null) 00:10:14.158 UUID: ad4a6f0a-cfba-4ee0-9109-5ccb74b2b8d7 00:10:14.158 Node size: 16384 00:10:14.158 Sector size: 4096 (CPU page size: 4096) 00:10:14.158 Filesystem size: 510.00MiB 00:10:14.158 Block group profiles: 00:10:14.158 Data: single 8.00MiB 00:10:14.158 Metadata: DUP 32.00MiB 00:10:14.158 System: DUP 8.00MiB 00:10:14.158 SSD detected: yes 00:10:14.158 Zoned device: no 00:10:14.158 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:14.158 Checksum: crc32c 00:10:14.158 Number of devices: 1 00:10:14.158 Devices: 00:10:14.158 ID SIZE PATH 00:10:14.158 1 510.00MiB /dev/nvme0n1p1 00:10:14.158 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 
00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:14.158 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71787 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:14.159 ************************************ 00:10:14.159 END TEST filesystem_btrfs 00:10:14.159 ************************************ 00:10:14.159 00:10:14.159 real 0m0.235s 00:10:14.159 user 0m0.015s 00:10:14.159 sys 0m0.066s 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 
-le 1 ']' 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.159 ************************************ 00:10:14.159 START TEST filesystem_xfs 00:10:14.159 ************************************ 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:14.159 09:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:14.159 09:03:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:14.159 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:14.159 = sectsz=512 attr=2, projid32bit=1 00:10:14.159 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:14.159 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:14.159 data = bsize=4096 blocks=130560, imaxpct=25 00:10:14.159 = sunit=0 swidth=0 blks 00:10:14.159 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:14.159 log =internal log bsize=4096 blocks=16384, version=2 00:10:14.159 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:14.159 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:15.094 Discarding blocks...Done. 00:10:15.094 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:15.094 09:03:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
target/filesystem.sh@29 -- # i=0 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71787 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.627 ************************************ 00:10:17.627 END TEST filesystem_xfs 00:10:17.627 ************************************ 00:10:17.627 00:10:17.627 real 0m3.253s 00:10:17.627 user 0m0.022s 00:10:17.627 sys 0m0.063s 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:10:17.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:17.627 09:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71787 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71787 ']' 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71787 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.627 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71787 00:10:17.627 killing process with pid 71787 00:10:17.628 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.628 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.628 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71787' 00:10:17.628 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71787 00:10:17.628 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 71787 00:10:17.886 ************************************ 00:10:17.886 END TEST nvmf_filesystem_no_in_capsule 00:10:17.886 ************************************ 00:10:17.886 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:17.886 
00:10:17.886 real 0m13.936s 00:10:17.886 user 0m53.175s 00:10:17.886 sys 0m1.980s 00:10:17.886 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.886 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.145 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:18.145 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.145 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.145 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.145 ************************************ 00:10:18.146 START TEST nvmf_filesystem_in_capsule 00:10:18.146 ************************************ 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
nvmf/common.sh@328 -- # nvmfpid=72141 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 72141 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 72141 ']' 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.146 09:03:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.146 [2024-11-20 09:03:56.906543] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:10:18.146 [2024-11-20 09:03:56.906645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.146 [2024-11-20 09:03:57.055033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.404 [2024-11-20 09:03:57.121014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.404 [2024-11-20 09:03:57.121074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.404 [2024-11-20 09:03:57.121087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.404 [2024-11-20 09:03:57.121096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.404 [2024-11-20 09:03:57.121103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:18.404 [2024-11-20 09:03:57.122308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.404 [2024-11-20 09:03:57.122392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.404 [2024-11-20 09:03:57.122525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.404 [2024-11-20 09:03:57.122527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 [2024-11-20 09:03:57.978316] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 Malloc1 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 09:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 [2024-11-20 09:03:58.151173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.340 09:03:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:19.340 { 00:10:19.340 "aliases": [ 00:10:19.340 "4a188275-83ee-4426-912a-bc9b6f450266" 00:10:19.340 ], 00:10:19.340 "assigned_rate_limits": { 00:10:19.340 "r_mbytes_per_sec": 0, 00:10:19.340 "rw_ios_per_sec": 0, 00:10:19.340 "rw_mbytes_per_sec": 0, 00:10:19.340 "w_mbytes_per_sec": 0 00:10:19.340 }, 00:10:19.340 "block_size": 512, 00:10:19.340 "claim_type": "exclusive_write", 00:10:19.340 "claimed": true, 00:10:19.340 "driver_specific": {}, 00:10:19.340 "memory_domains": [ 00:10:19.340 { 00:10:19.340 "dma_device_id": "system", 00:10:19.340 "dma_device_type": 1 00:10:19.340 }, 00:10:19.340 { 00:10:19.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.340 "dma_device_type": 2 00:10:19.340 } 00:10:19.340 ], 00:10:19.340 "name": "Malloc1", 00:10:19.340 "num_blocks": 1048576, 00:10:19.340 "product_name": "Malloc disk", 00:10:19.340 "supported_io_types": { 00:10:19.340 "abort": true, 00:10:19.340 "compare": false, 00:10:19.340 "compare_and_write": false, 00:10:19.340 "copy": true, 00:10:19.340 "flush": true, 00:10:19.340 "get_zone_info": false, 00:10:19.340 "nvme_admin": false, 00:10:19.340 "nvme_io": false, 00:10:19.340 "nvme_io_md": false, 00:10:19.340 "nvme_iov_md": false, 00:10:19.340 "read": true, 00:10:19.340 "reset": true, 00:10:19.340 "seek_data": false, 00:10:19.340 "seek_hole": false, 00:10:19.340 "unmap": true, 00:10:19.340 "write": true, 00:10:19.340 "write_zeroes": true, 00:10:19.340 "zcopy": true, 00:10:19.340 "zone_append": false, 00:10:19.340 "zone_management": false 00:10:19.340 }, 00:10:19.340 "uuid": "4a188275-83ee-4426-912a-bc9b6f450266", 
00:10:19.340 "zoned": false 00:10:19.340 } 00:10:19.340 ]' 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:19.340 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # 
[[ -n '' ]] 00:10:19.599 09:03:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:22.130 09:04:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:22.130 09:04:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.064 ************************************ 00:10:23.064 START TEST filesystem_in_capsule_ext4 00:10:23.064 ************************************ 00:10:23.064 09:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:23.064 mke2fs 1.47.0 (5-Feb-2023) 00:10:23.064 Discarding device blocks: 
0/522240 done 00:10:23.064 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:23.064 Filesystem UUID: 33836991-4ad0-4d9a-9195-90f5122182da 00:10:23.064 Superblock backups stored on blocks: 00:10:23.064 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:23.064 00:10:23.064 Allocating group tables: 0/64 done 00:10:23.064 Writing inode tables: 0/64 done 00:10:23.064 Creating journal (8192 blocks): done 00:10:23.064 Writing superblocks and filesystem accounting information: 0/64 done 00:10:23.064 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:23.064 09:04:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 72141 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.352 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.611 00:10:28.611 real 0m5.647s 00:10:28.611 user 0m0.029s 00:10:28.611 sys 0m0.059s 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:28.611 ************************************ 00:10:28.611 END TEST filesystem_in_capsule_ext4 00:10:28.611 ************************************ 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.611 ************************************ 00:10:28.611 START TEST 
filesystem_in_capsule_btrfs 00:10:28.611 ************************************ 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 
-- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:28.611 btrfs-progs v6.8.1 00:10:28.611 See https://btrfs.readthedocs.io for more information. 00:10:28.611 00:10:28.611 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:28.611 NOTE: several default settings have changed in version 5.15, please make sure 00:10:28.611 this does not affect your deployments: 00:10:28.611 - DUP for metadata (-m dup) 00:10:28.611 - enabled no-holes (-O no-holes) 00:10:28.611 - enabled free-space-tree (-R free-space-tree) 00:10:28.611 00:10:28.611 Label: (null) 00:10:28.611 UUID: 836d1c42-b53a-4aa9-a1f7-01ac3d056683 00:10:28.611 Node size: 16384 00:10:28.611 Sector size: 4096 (CPU page size: 4096) 00:10:28.611 Filesystem size: 510.00MiB 00:10:28.611 Block group profiles: 00:10:28.611 Data: single 8.00MiB 00:10:28.611 Metadata: DUP 32.00MiB 00:10:28.611 System: DUP 8.00MiB 00:10:28.611 SSD detected: yes 00:10:28.611 Zoned device: no 00:10:28.611 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:28.611 Checksum: crc32c 00:10:28.611 Number of devices: 1 00:10:28.611 Devices: 00:10:28.611 ID SIZE PATH 00:10:28.611 1 510.00MiB /dev/nvme0n1p1 00:10:28.611 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:28.611 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 72141 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.870 00:10:28.870 real 0m0.223s 00:10:28.870 user 0m0.022s 00:10:28.870 sys 0m0.057s 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.870 ************************************ 00:10:28.870 END TEST filesystem_in_capsule_btrfs 00:10:28.870 ************************************ 00:10:28.870 09:04:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.870 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.871 ************************************ 00:10:28.871 START TEST filesystem_in_capsule_xfs 00:10:28.871 ************************************ 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:28.871 
09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:28.871 09:04:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:28.871 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:28.871 = sectsz=512 attr=2, projid32bit=1 00:10:28.871 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:28.871 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:28.871 data = bsize=4096 blocks=130560, imaxpct=25 00:10:28.871 = sunit=0 swidth=0 blks 00:10:28.871 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:28.871 log =internal log bsize=4096 blocks=16384, version=2 00:10:28.871 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:28.871 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:29.807 Discarding blocks...Done. 
00:10:29.807 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:29.807 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:31.777 00:10:31.777 real 0m2.619s 00:10:31.777 user 0m0.031s 00:10:31.777 sys 0m0.041s 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.777 ************************************ 00:10:31.777 END TEST filesystem_in_capsule_xfs 00:10:31.777 ************************************ 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.777 09:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 72141 ']' 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.777 09:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.777 killing process with pid 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72141' 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 72141 00:10:31.777 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 72141 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:32.036 00:10:32.036 real 0m13.965s 00:10:32.036 user 0m53.551s 00:10:32.036 sys 0m2.016s 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.036 ************************************ 00:10:32.036 END TEST nvmf_filesystem_in_capsule 00:10:32.036 ************************************ 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 
-- # '[' tcp == tcp ']' 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:32.036 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:32.036 rmmod nvme_tcp 00:10:32.036 rmmod nvme_fabrics 00:10:32.036 rmmod nvme_keyring 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:10:32.294 09:04:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@114 -- # local 
dev=initiator1 in_ns= 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # continue 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # continue 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-save 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:32.294 00:10:32.294 real 0m29.180s 00:10:32.294 user 1m47.235s 00:10:32.294 sys 0m4.549s 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:32.294 ************************************ 00:10:32.294 END TEST nvmf_filesystem 00:10:32.294 ************************************ 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.294 ************************************ 00:10:32.294 START TEST nvmf_target_discovery 00:10:32.294 ************************************ 00:10:32.294 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:32.555 * Looking for test storage... 
00:10:32.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:32.555 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.555 --rc genhtml_branch_coverage=1 00:10:32.555 --rc genhtml_function_coverage=1 00:10:32.555 --rc genhtml_legend=1 00:10:32.555 --rc geninfo_all_blocks=1 00:10:32.555 --rc geninfo_unexecuted_blocks=1 00:10:32.555 00:10:32.555 ' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.555 --rc genhtml_branch_coverage=1 00:10:32.555 --rc genhtml_function_coverage=1 00:10:32.555 --rc genhtml_legend=1 00:10:32.555 --rc geninfo_all_blocks=1 00:10:32.555 --rc geninfo_unexecuted_blocks=1 00:10:32.555 00:10:32.555 ' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.555 --rc genhtml_branch_coverage=1 00:10:32.555 --rc genhtml_function_coverage=1 00:10:32.555 --rc genhtml_legend=1 00:10:32.555 --rc geninfo_all_blocks=1 00:10:32.555 --rc geninfo_unexecuted_blocks=1 00:10:32.555 00:10:32.555 ' 00:10:32.555 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.556 --rc genhtml_branch_coverage=1 00:10:32.556 --rc genhtml_function_coverage=1 00:10:32.556 --rc genhtml_legend=1 00:10:32.556 --rc geninfo_all_blocks=1 00:10:32.556 --rc geninfo_unexecuted_blocks=1 00:10:32.556 00:10:32.556 ' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@7 -- # uname -s 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:32.556 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:32.556 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.556 
09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:32.556 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 
-- # local initiator=initiator0 target=target0 _ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 
00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 
-- # echo 10.0.0.1 00:10:32.556 10.0.0.1 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:32.556 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:32.817 10.0.0.2 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:10:32.817 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:32.817 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:10:32.818 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:10:32.818 
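[Editor's aside: the setup_interface_pair flow being traced here (create a veth pair per role, move the target end into the nvmf_ns_spdk namespace, assign addresses from the IP pool, enslave the *_br peers to nvmf_br, and open TCP/4420) can be condensed into a dry-run sketch. This is an illustrative reconstruction from the trace, not the actual nvmf/setup.sh source; the `run` wrapper that echoes instead of executing is my own scaffolding, since the real commands need root and CAP_NET_ADMIN.]

```shell
#!/usr/bin/env bash
# Dry-run sketch of one setup_interface_pair iteration, reconstructed from
# the xtrace above. "run" echoes each command instead of executing it, so
# the sequence can be inspected without root privileges.
run() { printf '+ %s\n' "$*"; }

id=0 ns=nvmf_ns_spdk bridge=nvmf_br
initiator=initiator$id target=target$id

# 1. Create the veth pairs (device + bridge-facing peer) and bring them up.
run ip link add "$initiator" type veth peer name "${initiator}_br"
run ip link add "$target"    type veth peer name "${target}_br"
run ip link set "$initiator" up
run ip link set "$target" up

# 2. Move the target end into the test network namespace.
run ip link set "$target" netns "$ns"

# 3. Assign the address pair (10.0.0.1/10.0.0.2 for id=0).
run ip addr add 10.0.0.1/24 dev "$initiator"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"

# 4. Enslave the bridge-facing peers to the shared bridge.
run ip link set "${initiator}_br" master "$bridge"
run ip link set "${target}_br" master "$bridge"

# 5. Open the NVMe/TCP listener port on the initiator side.
run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
```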
09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:32.818 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:10:32.818 10.0.0.3 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:10:32.818 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:32.818 10.0.0.4 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:32.818 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:32.819 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:32.819 09:04:11 
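[Editor's aside: the repeated `printf '%u.%u.%u.%u\n'` lines above come from the val_to_ip helper, which turns the integer ip_pool counter into dotted-quad notation (167772161 is 0x0A000001, i.e. 10.0.0.1, and each interface pair consumes two consecutive values). A minimal bash re-implementation of that conversion, written from the trace rather than copied from nvmf/setup.sh:]

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation, mirroring the
# val_to_ip helper seen in the trace (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772164   # -> 10.0.0.4
```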
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:32.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:10:32.819 00:10:32.819 --- 10.0.0.1 ping statistics --- 00:10:32.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.819 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:32.819 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:32.819 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:32.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:10:32.820 00:10:32.820 --- 10.0.0.2 ping statistics --- 00:10:32.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.820 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:32.820 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:10:32.820 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:10:33.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:33.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:10:33.080 00:10:33.080 --- 10.0.0.3 ping statistics --- 00:10:33.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.080 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:33.080 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:10:33.080 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:33.080 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.130 ms 00:10:33.080 00:10:33.080 --- 10.0.0.4 ping statistics --- 00:10:33.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.080 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # return 0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:33.080 09:04:11 
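[Editor's aside: the legacy-env step that follows resolves addresses by reading back /sys/class/net/<dev>/ifalias, which set_ip populated earlier with tee. A self-contained sketch of that store/read-back round trip, using a temporary directory as a stand-in for sysfs; the `fake_sysfs` path and the two helper names are my own scaffolding, not part of the test suite:]

```shell
# Sketch of the ifalias round trip seen in the trace: set_ip records the
# address in the interface's ifalias file, and later helpers cat it back.
# A temp directory stands in for /sys/class/net, so no real interfaces
# (or root privileges) are needed.
fake_sysfs=$(mktemp -d)
mkdir -p "$fake_sysfs/initiator0"

set_ip_alias() {  # dev ip -> record ip in the (fake) ifalias file
    echo "$2" | tee "$fake_sysfs/$1/ifalias" > /dev/null
}

get_ip_address() {  # dev -> print the recorded ip, as get_ip_address does
    cat "$fake_sysfs/$1/ifalias"
}

set_ip_alias initiator0 10.0.0.1
get_ip_address initiator0   # -> 10.0.0.1
```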
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:33.080 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:33.081 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target0 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo target1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=72729 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 72729 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72729 ']' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.081 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:33.081 [2024-11-20 09:04:11.914051] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:10:33.081 [2024-11-20 09:04:11.914176] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.340 [2024-11-20 09:04:12.071992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.340 [2024-11-20 09:04:12.141489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.340 [2024-11-20 09:04:12.141553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.340 [2024-11-20 09:04:12.141568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.340 [2024-11-20 09:04:12.141578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.340 [2024-11-20 09:04:12.141587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:33.340 [2024-11-20 09:04:12.142893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.340 [2024-11-20 09:04:12.142940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.340 [2024-11-20 09:04:12.143021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.340 [2024-11-20 09:04:12.143027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.275 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.275 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:34.276 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:34.276 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.276 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 [2024-11-20 09:04:13.015246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:10:34.276 09:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 Null1 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 [2024-11-20 09:04:13.059394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 Null2 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 
09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 Null3 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.276 Null4 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:34.276 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:34.277 09:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.277 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 4420 00:10:34.536 00:10:34.536 Discovery Log Number of Records 6, Generation counter 6 00:10:34.536 =====Discovery Log Entry 0====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: current discovery subsystem 00:10:34.536 treq: not required 00:10:34.536 portid: 0 00:10:34.536 trsvcid: 4420 00:10:34.536 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: explicit discovery connections, duplicate discovery information 00:10:34.536 sectype: none 00:10:34.536 =====Discovery Log Entry 1====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: nvme subsystem 00:10:34.536 treq: not required 00:10:34.536 portid: 0 00:10:34.536 trsvcid: 4420 00:10:34.536 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: none 00:10:34.536 sectype: none 00:10:34.536 =====Discovery Log Entry 2====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: nvme subsystem 00:10:34.536 treq: not required 00:10:34.536 portid: 0 00:10:34.536 trsvcid: 4420 00:10:34.536 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: none 00:10:34.536 sectype: none 00:10:34.536 =====Discovery Log Entry 3====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: nvme subsystem 00:10:34.536 treq: not required 00:10:34.536 portid: 
0 00:10:34.536 trsvcid: 4420 00:10:34.536 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: none 00:10:34.536 sectype: none 00:10:34.536 =====Discovery Log Entry 4====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: nvme subsystem 00:10:34.536 treq: not required 00:10:34.536 portid: 0 00:10:34.536 trsvcid: 4420 00:10:34.536 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: none 00:10:34.536 sectype: none 00:10:34.536 =====Discovery Log Entry 5====== 00:10:34.536 trtype: tcp 00:10:34.536 adrfam: ipv4 00:10:34.536 subtype: discovery subsystem referral 00:10:34.536 treq: not required 00:10:34.536 portid: 0 00:10:34.536 trsvcid: 4430 00:10:34.536 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:34.536 traddr: 10.0.0.2 00:10:34.536 eflags: none 00:10:34.536 sectype: none 00:10:34.536 Perform nvmf subsystem discovery via RPC 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 [ 00:10:34.536 { 00:10:34.536 "allow_any_host": true, 00:10:34.536 "hosts": [], 00:10:34.536 "listen_addresses": [ 00:10:34.536 { 00:10:34.536 "adrfam": "IPv4", 00:10:34.536 "traddr": "10.0.0.2", 00:10:34.536 "trsvcid": "4420", 00:10:34.536 "trtype": "TCP" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:34.536 "subtype": "Discovery" 00:10:34.536 }, 00:10:34.536 { 00:10:34.536 "allow_any_host": true, 00:10:34.536 "hosts": [], 00:10:34.536 "listen_addresses": [ 00:10:34.536 { 
00:10:34.536 "adrfam": "IPv4", 00:10:34.536 "traddr": "10.0.0.2", 00:10:34.536 "trsvcid": "4420", 00:10:34.536 "trtype": "TCP" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "max_cntlid": 65519, 00:10:34.536 "max_namespaces": 32, 00:10:34.536 "min_cntlid": 1, 00:10:34.536 "model_number": "SPDK bdev Controller", 00:10:34.536 "namespaces": [ 00:10:34.536 { 00:10:34.536 "bdev_name": "Null1", 00:10:34.536 "name": "Null1", 00:10:34.536 "nguid": "33DA0520054F41E8A087E8EDB50E01DA", 00:10:34.536 "nsid": 1, 00:10:34.536 "uuid": "33da0520-054f-41e8-a087-e8edb50e01da" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:34.536 "serial_number": "SPDK00000000000001", 00:10:34.536 "subtype": "NVMe" 00:10:34.536 }, 00:10:34.536 { 00:10:34.536 "allow_any_host": true, 00:10:34.536 "hosts": [], 00:10:34.536 "listen_addresses": [ 00:10:34.536 { 00:10:34.536 "adrfam": "IPv4", 00:10:34.536 "traddr": "10.0.0.2", 00:10:34.536 "trsvcid": "4420", 00:10:34.536 "trtype": "TCP" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "max_cntlid": 65519, 00:10:34.536 "max_namespaces": 32, 00:10:34.536 "min_cntlid": 1, 00:10:34.536 "model_number": "SPDK bdev Controller", 00:10:34.536 "namespaces": [ 00:10:34.536 { 00:10:34.536 "bdev_name": "Null2", 00:10:34.536 "name": "Null2", 00:10:34.536 "nguid": "4A8288A392044DE98A7B9CB99E3AA645", 00:10:34.536 "nsid": 1, 00:10:34.536 "uuid": "4a8288a3-9204-4de9-8a7b-9cb99e3aa645" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:34.536 "serial_number": "SPDK00000000000002", 00:10:34.536 "subtype": "NVMe" 00:10:34.536 }, 00:10:34.536 { 00:10:34.536 "allow_any_host": true, 00:10:34.536 "hosts": [], 00:10:34.536 "listen_addresses": [ 00:10:34.536 { 00:10:34.536 "adrfam": "IPv4", 00:10:34.536 "traddr": "10.0.0.2", 00:10:34.536 "trsvcid": "4420", 00:10:34.536 "trtype": "TCP" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "max_cntlid": 65519, 00:10:34.536 "max_namespaces": 32, 00:10:34.536 "min_cntlid": 1, 
00:10:34.536 "model_number": "SPDK bdev Controller", 00:10:34.536 "namespaces": [ 00:10:34.536 { 00:10:34.536 "bdev_name": "Null3", 00:10:34.536 "name": "Null3", 00:10:34.536 "nguid": "04E670C846914AD28E501F440F26835A", 00:10:34.536 "nsid": 1, 00:10:34.536 "uuid": "04e670c8-4691-4ad2-8e50-1f440f26835a" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:34.536 "serial_number": "SPDK00000000000003", 00:10:34.536 "subtype": "NVMe" 00:10:34.536 }, 00:10:34.536 { 00:10:34.536 "allow_any_host": true, 00:10:34.536 "hosts": [], 00:10:34.536 "listen_addresses": [ 00:10:34.536 { 00:10:34.536 "adrfam": "IPv4", 00:10:34.536 "traddr": "10.0.0.2", 00:10:34.536 "trsvcid": "4420", 00:10:34.536 "trtype": "TCP" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "max_cntlid": 65519, 00:10:34.536 "max_namespaces": 32, 00:10:34.536 "min_cntlid": 1, 00:10:34.536 "model_number": "SPDK bdev Controller", 00:10:34.536 "namespaces": [ 00:10:34.536 { 00:10:34.536 "bdev_name": "Null4", 00:10:34.536 "name": "Null4", 00:10:34.536 "nguid": "0B71641093E34908A5131003FD79B91B", 00:10:34.536 "nsid": 1, 00:10:34.536 "uuid": "0b716410-93e3-4908-a513-1003fd79b91b" 00:10:34.536 } 00:10:34.536 ], 00:10:34.536 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:34.536 "serial_number": "SPDK00000000000004", 00:10:34.536 "subtype": "NVMe" 00:10:34.536 } 00:10:34.536 ] 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.536 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 
1 4) 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:34.537 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:10:34.796 
09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:34.796 rmmod nvme_tcp 00:10:34.796 rmmod nvme_fabrics 00:10:34.796 rmmod nvme_keyring 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 72729 ']' 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 72729 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72729 ']' 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72729 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72729 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:10:34.796 killing process with pid 72729 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72729' 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72729 00:10:34.796 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72729 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' 
ip link delete nvmf_br' 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:35.055 
09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # continue 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # continue 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:10:35.055 00:10:35.055 real 0m2.776s 00:10:35.055 user 0m7.041s 00:10:35.055 sys 0m0.762s 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:35.055 
************************************ 00:10:35.055 END TEST nvmf_target_discovery 00:10:35.055 ************************************ 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.055 09:04:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.315 ************************************ 00:10:35.315 START TEST nvmf_referrals 00:10:35.315 ************************************ 00:10:35.315 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:35.315 * Looking for test storage... 00:10:35.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:35.315 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:35.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.315 --rc genhtml_branch_coverage=1 00:10:35.315 --rc genhtml_function_coverage=1 00:10:35.315 --rc genhtml_legend=1 00:10:35.315 --rc geninfo_all_blocks=1 00:10:35.315 --rc geninfo_unexecuted_blocks=1 
00:10:35.315 00:10:35.315 ' 00:10:35.315 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:35.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.315 --rc genhtml_branch_coverage=1 00:10:35.315 --rc genhtml_function_coverage=1 00:10:35.315 --rc genhtml_legend=1 00:10:35.315 --rc geninfo_all_blocks=1 00:10:35.315 --rc geninfo_unexecuted_blocks=1 00:10:35.315 00:10:35.316 ' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:35.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.316 --rc genhtml_branch_coverage=1 00:10:35.316 --rc genhtml_function_coverage=1 00:10:35.316 --rc genhtml_legend=1 00:10:35.316 --rc geninfo_all_blocks=1 00:10:35.316 --rc geninfo_unexecuted_blocks=1 00:10:35.316 00:10:35.316 ' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:35.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:35.316 --rc genhtml_branch_coverage=1 00:10:35.316 --rc genhtml_function_coverage=1 00:10:35.316 --rc genhtml_legend=1 00:10:35.316 --rc geninfo_all_blocks=1 00:10:35.316 --rc geninfo_unexecuted_blocks=1 00:10:35.316 00:10:35.316 ' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.316 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 -- # : 0 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.316 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:35.316 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:10:35.316 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@223 -- # create_target_ns 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@105 -- # delete_main_bridge 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:10:35.316 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT' 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:10:35.317 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up target0 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up target0_br 00:10:35.317 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:35.577 10.0.0.1 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # 
ip=10.0.0.2 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:35.577 10.0.0.2 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up initiator0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:35.577 
09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
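The trace above walks through `create_veth`, `add_to_ns`, and `add_to_bridge` for the `initiator0`/`target0` pair. A condensed dry-run sketch of that wiring (device and namespace names mirror the log; the `run` helper only prints the commands, so no root privileges are needed to inspect the sequence):

```shell
# Dry-run condensation of the veth wiring seen in nvmf/setup.sh's trace.
# "run" prints instead of executing, so this is safe without root.
run() { printf '%s\n' "$*"; }

wire_pair() {
    local id=$1 ns=nvmf_ns_spdk bridge=nvmf_br
    run ip link add "initiator$id" type veth peer name "initiator${id}_br"
    run ip link add "target$id"    type veth peer name "target${id}_br"
    run ip link set "target$id" netns "$ns"            # target end lives in the netns
    run ip link set "initiator${id}_br" master "$bridge"
    run ip link set "target${id}_br"    master "$bridge"
}

wire_pair 0
```

The key design point visible in the log: only the `target` end of each pair is moved into `nvmf_ns_spdk`, while both `*_br` peer ends are enslaved to the `nvmf_br` bridge on the host side, which is what lets initiator and target traffic cross namespaces.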
00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@151 -- # set_up target1 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:35.577 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1 up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # 
set_ip initiator1 167772163 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772163 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:35.578 10.0.0.3 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 
167772164 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772164 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:35.578 10.0.0.4 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 
00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 2 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:35.578 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:35.578 09:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:35.578 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:35.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:35.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:10:35.838 00:10:35.838 --- 10.0.0.1 ping statistics --- 00:10:35.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.838 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target0 
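The `val_to_ip` calls in the trace turn the integer pool value into a dotted quad (167772161 is 0x0A000001, i.e. 10.0.0.1). A standalone rendition of that conversion, assuming the same big-endian octet order as the log's `printf '%u.%u.%u.%u\n' 10 0 0 1` output:

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address, matching the
# val_to_ip helper traced above (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772164   # 10.0.0.4
```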
00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target0 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:35.838 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:35.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:35.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms
00:10:35.839
00:10:35.839 --- 10.0.0.2 ping statistics ---
00:10:35.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.839 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ ))
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:10:35.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:10:35.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms
00:10:35.839
00:10:35.839 --- 10.0.0.3 ping statistics ---
00:10:35.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.839 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:10:35.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:10:35.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms
00:10:35.839
00:10:35.839 --- 10.0.0.4 ping statistics ---
00:10:35.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:35.839 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ ))
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # return 0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=initiator1
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0
00:10:35.839 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target0
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target0
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo target1
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=target1
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=73008
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 73008
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 73008 ']'
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:35.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:35.840 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:35.840 [2024-11-20 09:04:14.723530] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:10:35.840 [2024-11-20 09:04:14.724297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:36.099 [2024-11-20 09:04:14.877169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:36.099 [2024-11-20 09:04:14.948291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:36.099 [2024-11-20 09:04:14.948354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:36.099 [2024-11-20 09:04:14.948368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:36.099 [2024-11-20 09:04:14.948378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:36.099 [2024-11-20 09:04:14.948388] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:36.099 [2024-11-20 09:04:14.949621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:36.099 [2024-11-20 09:04:14.949717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:36.099 [2024-11-20 09:04:14.949831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:36.099 [2024-11-20 09:04:14.949833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 [2024-11-20 09:04:15.134443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 [2024-11-20 09:04:15.150871] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:36.358 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:36.617 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:36.876 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:37.136 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json
00:10:37.395 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- #
jq -r .subnqn 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips 
nvme 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:37.654 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:37.913 rmmod nvme_tcp 00:10:37.913 rmmod nvme_fabrics 00:10:37.913 rmmod nvme_keyring 00:10:37.913 09:04:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 73008 ']' 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 73008 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 73008 ']' 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 73008 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:37.913 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73008 00:10:38.170 killing process with pid 73008 00:10:38.170 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.170 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.170 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73008' 00:10:38.170 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 73008 00:10:38.170 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 73008 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # nvmf_fini 00:10:38.170 
09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:38.170 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:10:38.171 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:38.429 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # continue 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # continue 00:10:38.429 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:10:38.429 ************************************ 00:10:38.429 END TEST nvmf_referrals 00:10:38.429 ************************************ 00:10:38.429 00:10:38.429 real 0m3.218s 00:10:38.429 user 0m9.309s 00:10:38.429 sys 0m1.040s 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.429 ************************************ 00:10:38.429 START TEST nvmf_connect_disconnect 00:10:38.429 ************************************ 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh 
--transport=tcp 00:10:38.429 * Looking for test storage... 00:10:38.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.429 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.690 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:38.690 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.690 --rc genhtml_branch_coverage=1 00:10:38.690 --rc genhtml_function_coverage=1 00:10:38.690 --rc genhtml_legend=1 00:10:38.690 --rc geninfo_all_blocks=1 00:10:38.690 --rc geninfo_unexecuted_blocks=1 00:10:38.690 00:10:38.690 ' 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.690 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:38.691 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:38.691 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@223 -- # create_target_ns 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns 
exec nvmf_ns_spdk ip link set lo up' 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:38.691 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@105 -- # delete_main_bridge 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local 
no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:38.692 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link 
add target0 type veth peer name target0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up target0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up target0_br 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local 
dev=initiator0 ip=167772161 in_ns= 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:38.692 10.0.0.1 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:38.692 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:38.692 10.0.0.2 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up initiator0 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:38.693 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:38.693 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( 
_dev < max + no )) 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:38.693 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.953 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@151 -- # set_up target1 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1 up 
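The `setup_interfaces 2 veth` loop traced above hands each initiator/target pair two consecutive addresses from a pool starting at `0x0a000001` (10.0.0.1), which is why the trace shows 167772161/167772162 for pair 0 and 167772163/167772164 for pair 1. A minimal sketch of that allocation arithmetic, with variable names borrowed from `nvmf/setup.sh` (the real loop interleaves this with the veth/bridge device creation shown in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the address-allocation arithmetic from nvmf/setup.sh's
# setup_interfaces: each initiator/target pair consumes two consecutive
# IPs from a pool beginning at 0x0a000001 (10.0.0.1 == 167772161).
# The real script also creates the interfaces; this only shows the math.
no=2                      # number of initiator/target pairs
ip_pool=$(( 0x0a000001 )) # 167772161 == 10.0.0.1
_dev=0
while (( _dev < no )); do
  initiator_ip=$ip_pool
  target_ip=$(( ip_pool + 1 ))
  printf 'pair %u: initiator=%u target=%u\n' "$_dev" "$initiator_ip" "$target_ip"
  (( _dev++, ip_pool += 2 ))
done
# pair 0: initiator=167772161 target=167772162
# pair 1: initiator=167772163 target=167772164
```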
00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:38.953 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772163 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:38.954 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:10:38.954 10.0.0.3 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772164 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:38.954 10.0.0.4 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 
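The `val_to_ip` calls in the trace (`nvmf/setup.sh@11`–`@13`) turn those pool integers into dotted-quad strings via `printf '%u.%u.%u.%u\n'`. The trace only shows the already-split octets, so the byte extraction below is an assumed reconstruction, not the script's verbatim body:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip from nvmf/setup.sh: decode a 32-bit integer into
# dotted-quad notation, matching the conversions seen in the trace
# (167772161 -> 10.0.0.1, 167772164 -> 10.0.0.4). The shift-and-mask
# octet extraction is an assumption; only the printf format is traced.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772164   # 10.0.0.4
```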
00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 2 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:10:38.954 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:38.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:10:38.955 00:10:38.955 --- 10.0.0.1 ping statistics --- 00:10:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.955 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@101 -- # echo target0 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target0 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:38.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:10:38.955 00:10:38.955 --- 10.0.0.2 ping statistics --- 00:10:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.955 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:38.955 
09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:10:38.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:38.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:10:38.955 00:10:38.955 --- 10.0.0.3 ping statistics --- 00:10:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.955 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:38.955 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:10:38.955 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:10:38.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:38.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:10:38.955 00:10:38.955 --- 10.0.0.4 ping statistics --- 00:10:38.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.955 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # return 0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:38.956 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:38.956 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:38.956 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo initiator1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:39.216 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target0 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target0 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo target1 00:10:39.216 09:04:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=target1 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=73359 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 73359 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 73359 ']' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.216 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:39.216 [2024-11-20 09:04:18.001344] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:10:39.216 [2024-11-20 09:04:18.001471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.475 [2024-11-20 09:04:18.149538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.475 [2024-11-20 09:04:18.216717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.475 [2024-11-20 09:04:18.216798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.475 [2024-11-20 09:04:18.216811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.475 [2024-11-20 09:04:18.216820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.475 [2024-11-20 09:04:18.216827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.475 [2024-11-20 09:04:18.218042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.475 [2024-11-20 09:04:18.218146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.475 [2024-11-20 09:04:18.218228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.475 [2024-11-20 09:04:18.218230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.410 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 [2024-11-20 09:04:19.051925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.411 09:04:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:40.411 [2024-11-20 09:04:19.120065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:40.411 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:42.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:51.929 rmmod nvme_tcp 00:10:51.929 rmmod nvme_fabrics 00:10:51.929 rmmod nvme_keyring 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 73359 ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 73359 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 73359 ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 73359 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73359 00:10:51.929 killing process with pid 73359 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73359' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 73359 00:10:51.929 09:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 73359 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:10:51.929 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:52.207 09:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:52.207 09:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # continue 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # continue 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:10:52.207 ************************************ 00:10:52.207 END TEST nvmf_connect_disconnect 00:10:52.207 ************************************ 00:10:52.207 00:10:52.207 real 0m13.682s 00:10:52.207 user 0m49.276s 00:10:52.207 sys 0m2.037s 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.207 09:04:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.207 ************************************ 00:10:52.208 START TEST nvmf_multitarget 00:10:52.208 ************************************ 00:10:52.208 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:52.208 * Looking for test storage... 00:10:52.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.208 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:52.208 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:52.468 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:52.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.468 --rc genhtml_branch_coverage=1 00:10:52.468 --rc genhtml_function_coverage=1 00:10:52.468 --rc genhtml_legend=1 00:10:52.468 --rc geninfo_all_blocks=1 00:10:52.468 --rc geninfo_unexecuted_blocks=1 00:10:52.468 00:10:52.468 ' 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:52.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.468 --rc genhtml_branch_coverage=1 00:10:52.468 --rc genhtml_function_coverage=1 00:10:52.468 --rc genhtml_legend=1 00:10:52.468 --rc geninfo_all_blocks=1 00:10:52.468 --rc geninfo_unexecuted_blocks=1 00:10:52.468 00:10:52.468 ' 00:10:52.468 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:52.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.468 --rc genhtml_branch_coverage=1 00:10:52.469 --rc genhtml_function_coverage=1 00:10:52.469 --rc genhtml_legend=1 00:10:52.469 --rc geninfo_all_blocks=1 00:10:52.469 --rc geninfo_unexecuted_blocks=1 00:10:52.469 00:10:52.469 ' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- 
# LCOV='lcov 00:10:52.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.469 --rc genhtml_branch_coverage=1 00:10:52.469 --rc genhtml_function_coverage=1 00:10:52.469 --rc genhtml_legend=1 00:10:52.469 --rc geninfo_all_blocks=1 00:10:52.469 --rc geninfo_unexecuted_blocks=1 00:10:52.469 00:10:52.469 ' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.469 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@50 -- # : 0 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:52.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
virt == phy-fallback ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@223 -- # create_target_ns 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@105 -- # delete_main_bridge 
00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:52.469 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up target0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up target0_br 
00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 
10.0.0.1/24 dev initiator0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:52.470 10.0.0.1 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # 
ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:52.470 10.0.0.2 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up initiator0 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:52.470 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:52.471 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local 
dev=initiator1 peer=initiator1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@151 -- # set_up target1 00:10:52.471 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1 up 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:52.471 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:52.731 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772163 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:52.731 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:10:52.732 10.0.0.3 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772164 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:52.732 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:52.732 10.0.0.4 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:52.732 
09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 2 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:52.732 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:52.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:52.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:10:52.733 00:10:52.733 --- 10.0.0.1 ping statistics --- 00:10:52.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.733 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target0 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target0 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 
00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:52.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:52.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:10:52.733 00:10:52.733 --- 10.0.0.2 ping statistics --- 00:10:52.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.733 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.3 
00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:10:52.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:52.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:52.733 00:10:52.733 --- 10.0.0.3 ping statistics --- 00:10:52.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.733 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target1 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:52.733 
09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:52.733 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:10:52.734 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:52.734 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:10:52.734 00:10:52.734 --- 10.0.0.4 ping statistics --- 00:10:52.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.734 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # return 0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2=target1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 
00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo initiator1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:52.734 
09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target0 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:52.734 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo target1 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=target1 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:52.735 09:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=73809 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 73809 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73809 ']' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.735 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:52.994 [2024-11-20 09:04:31.697015] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:10:52.994 [2024-11-20 09:04:31.697112] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.994 [2024-11-20 09:04:31.843387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:52.994 [2024-11-20 09:04:31.904648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.994 [2024-11-20 09:04:31.904714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.994 [2024-11-20 09:04:31.904748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.994 [2024-11-20 09:04:31.904756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.994 [2024-11-20 09:04:31.904779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:52.995 [2024-11-20 09:04:31.906066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.995 [2024-11-20 09:04:31.906156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.995 [2024-11-20 09:04:31.906312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.995 [2024-11-20 09:04:31.906333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.930 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:53.931 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:54.190 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:54.190 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:54.190 "nvmf_tgt_1" 
00:10:54.190 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:54.449 "nvmf_tgt_2" 00:10:54.449 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:54.449 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:54.449 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:54.449 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:54.707 true 00:10:54.707 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:54.707 true 00:10:54.707 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:54.707 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 
-- # '[' tcp == tcp ']' 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:54.965 rmmod nvme_tcp 00:10:54.965 rmmod nvme_fabrics 00:10:54.965 rmmod nvme_keyring 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:54.965 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 73809 ']' 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 73809 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73809 ']' 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73809 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73809 00:10:54.966 killing process with pid 73809 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 73809' 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73809 00:10:54.966 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73809 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:10:55.225 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for 
dev in "${dev_map[@]}" 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ 
-e /sys/class/net/target0/address ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # continue 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # continue 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-save 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:10:55.484 00:10:55.484 real 0m3.258s 00:10:55.484 user 0m9.939s 00:10:55.484 sys 0m0.844s 00:10:55.484 ************************************ 00:10:55.484 END TEST nvmf_multitarget 00:10:55.484 ************************************ 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:55.484 09:04:34 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 ************************************ 00:10:55.484 START TEST nvmf_rpc 00:10:55.484 ************************************ 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:55.484 * Looking for test storage... 00:10:55.484 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:55.484 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.745 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:55.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.745 --rc genhtml_branch_coverage=1 00:10:55.745 --rc genhtml_function_coverage=1 00:10:55.745 --rc genhtml_legend=1 00:10:55.745 --rc geninfo_all_blocks=1 00:10:55.745 --rc geninfo_unexecuted_blocks=1 00:10:55.745 00:10:55.745 ' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:55.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.745 --rc genhtml_branch_coverage=1 00:10:55.745 --rc genhtml_function_coverage=1 00:10:55.745 --rc genhtml_legend=1 00:10:55.745 --rc geninfo_all_blocks=1 00:10:55.745 --rc geninfo_unexecuted_blocks=1 00:10:55.745 00:10:55.745 ' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:55.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.745 --rc genhtml_branch_coverage=1 00:10:55.745 --rc genhtml_function_coverage=1 00:10:55.745 --rc genhtml_legend=1 00:10:55.745 --rc geninfo_all_blocks=1 00:10:55.745 --rc geninfo_unexecuted_blocks=1 00:10:55.745 00:10:55.745 ' 00:10:55.745 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:55.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.745 --rc genhtml_branch_coverage=1 00:10:55.745 --rc genhtml_function_coverage=1 00:10:55.745 --rc genhtml_legend=1 00:10:55.745 --rc geninfo_all_blocks=1 00:10:55.745 --rc geninfo_unexecuted_blocks=1 00:10:55.745 00:10:55.745 ' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@7 -- # uname -s 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:10:55.746 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:55.746 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@280 -- # nvmf_veth_init 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@223 -- # create_target_ns 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 
'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # create_main_bridge 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@28 -- # local -g _dev 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:55.746 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up initiator0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:55.747 
09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up target0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up target0_br 00:10:55.747 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/initiator0/ifalias' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:10:55.747 10.0.0.1 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:10:55.747 10.0.0.2 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
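The `val_to_ip` calls traced above turn a 32-bit integer from the IP pool (e.g. 167772161, i.e. 0x0A000001) into a dotted quad. A minimal stand-alone sketch of that conversion, using plain bash arithmetic (the byte-extraction shifts are an assumption about how the printf arguments are derived; the trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`):

```shell
# Sketch: convert a 32-bit integer to dotted-quad notation, as the
# val_to_ip helper in the trace does for the 0x0a000001-based IP pool.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This matches the pool arithmetic in the trace, where each initiator/target pair consumes two consecutive addresses (`ip_pool += 2`).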
nvmf/setup.sh@66 -- # set_up initiator0 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up target0_br 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:10:55.747 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
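The `ipts` wrapper traced above (nvmf/common.sh@547) tags every firewall rule it installs with an `SPDK_NVMF:` comment containing the original arguments, so teardown can later identify and delete exactly the rules the test added. A runnable sketch of that pattern; `iptables` is stubbed to echo its arguments so the example runs without root and without touching the real firewall:

```shell
# Stub for demonstration only: real iptables requires root. The stub just
# prints the command line so the comment-tagging behavior is visible.
iptables() { echo "iptables $*"; }

# Wrap every rule with a comment recording the exact arguments, mirroring
# the SPDK_NVMF tagging seen in the trace.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
```

With the tag in place, cleanup can list rules, filter on the `SPDK_NVMF:` comment, and replay each one with `-D` instead of `-A`/`-I`.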
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up initiator1 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:56.007 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@151 -- # set_up target1 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # set_up target1_br 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.007 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772163 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 
-- # echo 10.0.0.3 00:10:56.007 10.0.0.3 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:10:56.007 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772164 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:10:56.008 10.0.0.4 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:10:56.008 
09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@129 -- # set_up target1_br 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 2 
00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 
10.0.0.1 NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:56.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:10:56.008 00:10:56.008 --- 10.0.0.1 ping statistics --- 00:10:56.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.008 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target0 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:56.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:56.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:10:56.008 00:10:56.008 --- 10.0.0.2 ping statistics --- 00:10:56.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.008 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:56.008 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:56.009 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:10:56.009 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:56.009 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:56.009 00:10:56.009 --- 10.0.0.3 ping statistics --- 00:10:56.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.009 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:10:56.009 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:10:56.009 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:10:56.009 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:10:56.009 00:10:56.009 --- 10.0.0.4 ping statistics --- 00:10:56.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.009 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # return 0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:56.009 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=initiator1 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target0 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:56.009 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo target1 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=target1 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=74092 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@329 -- # waitforlisten 74092 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 74092 ']' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.268 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.268 [2024-11-20 09:04:35.033382] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:10:56.268 [2024-11-20 09:04:35.033496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.527 [2024-11-20 09:04:35.186694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.527 [2024-11-20 09:04:35.253831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.527 [2024-11-20 09:04:35.253887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:56.527 [2024-11-20 09:04:35.253916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.527 [2024-11-20 09:04:35.253927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.527 [2024-11-20 09:04:35.253936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:56.527 [2024-11-20 09:04:35.255244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.527 [2024-11-20 09:04:35.255390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.527 [2024-11-20 09:04:35.255475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.527 [2024-11-20 09:04:35.255476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.527 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.785 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:56.785 "poll_groups": [ 00:10:56.785 { 00:10:56.785 "admin_qpairs": 0, 00:10:56.785 "completed_nvme_io": 0, 00:10:56.785 "current_admin_qpairs": 0, 00:10:56.785 "current_io_qpairs": 0, 00:10:56.785 "io_qpairs": 0, 00:10:56.785 "name": "nvmf_tgt_poll_group_000", 00:10:56.785 "pending_bdev_io": 0, 00:10:56.785 "transports": [] 00:10:56.785 }, 00:10:56.785 { 00:10:56.785 "admin_qpairs": 0, 00:10:56.785 "completed_nvme_io": 0, 00:10:56.785 "current_admin_qpairs": 0, 00:10:56.785 "current_io_qpairs": 0, 00:10:56.785 "io_qpairs": 0, 00:10:56.785 "name": "nvmf_tgt_poll_group_001", 00:10:56.785 "pending_bdev_io": 0, 00:10:56.785 "transports": [] 00:10:56.785 }, 00:10:56.785 { 00:10:56.785 "admin_qpairs": 0, 00:10:56.785 "completed_nvme_io": 0, 00:10:56.785 "current_admin_qpairs": 0, 00:10:56.785 "current_io_qpairs": 0, 00:10:56.785 "io_qpairs": 0, 00:10:56.785 "name": "nvmf_tgt_poll_group_002", 00:10:56.785 "pending_bdev_io": 0, 00:10:56.785 "transports": [] 00:10:56.785 }, 00:10:56.785 { 00:10:56.785 "admin_qpairs": 0, 00:10:56.785 "completed_nvme_io": 0, 00:10:56.785 "current_admin_qpairs": 0, 00:10:56.785 "current_io_qpairs": 0, 00:10:56.785 "io_qpairs": 0, 00:10:56.785 "name": "nvmf_tgt_poll_group_003", 00:10:56.785 "pending_bdev_io": 0, 00:10:56.785 "transports": [] 00:10:56.785 } 00:10:56.785 ], 00:10:56.785 "tick_rate": 2200000000 00:10:56.785 }' 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:56.785 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.785 [2024-11-20 09:04:35.563501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:56.785 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:56.786 "poll_groups": [ 00:10:56.786 { 00:10:56.786 "admin_qpairs": 0, 00:10:56.786 "completed_nvme_io": 0, 00:10:56.786 "current_admin_qpairs": 0, 00:10:56.786 "current_io_qpairs": 0, 00:10:56.786 "io_qpairs": 0, 00:10:56.786 "name": "nvmf_tgt_poll_group_000", 00:10:56.786 "pending_bdev_io": 0, 00:10:56.786 "transports": [ 00:10:56.786 { 00:10:56.786 "trtype": "TCP" 00:10:56.786 } 00:10:56.786 ] 00:10:56.786 }, 00:10:56.786 { 00:10:56.786 "admin_qpairs": 0, 00:10:56.786 "completed_nvme_io": 0, 00:10:56.786 "current_admin_qpairs": 0, 00:10:56.786 "current_io_qpairs": 0, 00:10:56.786 "io_qpairs": 0, 00:10:56.786 "name": "nvmf_tgt_poll_group_001", 00:10:56.786 "pending_bdev_io": 0, 00:10:56.786 "transports": 
[ 00:10:56.786 { 00:10:56.786 "trtype": "TCP" 00:10:56.786 } 00:10:56.786 ] 00:10:56.786 }, 00:10:56.786 { 00:10:56.786 "admin_qpairs": 0, 00:10:56.786 "completed_nvme_io": 0, 00:10:56.786 "current_admin_qpairs": 0, 00:10:56.786 "current_io_qpairs": 0, 00:10:56.786 "io_qpairs": 0, 00:10:56.786 "name": "nvmf_tgt_poll_group_002", 00:10:56.786 "pending_bdev_io": 0, 00:10:56.786 "transports": [ 00:10:56.786 { 00:10:56.786 "trtype": "TCP" 00:10:56.786 } 00:10:56.786 ] 00:10:56.786 }, 00:10:56.786 { 00:10:56.786 "admin_qpairs": 0, 00:10:56.786 "completed_nvme_io": 0, 00:10:56.786 "current_admin_qpairs": 0, 00:10:56.786 "current_io_qpairs": 0, 00:10:56.786 "io_qpairs": 0, 00:10:56.786 "name": "nvmf_tgt_poll_group_003", 00:10:56.786 "pending_bdev_io": 0, 00:10:56.786 "transports": [ 00:10:56.786 { 00:10:56.786 "trtype": "TCP" 00:10:56.786 } 00:10:56.786 ] 00:10:56.786 } 00:10:56.786 ], 00:10:56.786 "tick_rate": 2200000000 00:10:56.786 }' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:56.786 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:56.786 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.045 Malloc1 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:57.045 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.045 [2024-11-20 09:04:35.771127] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.2 -s 4420 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.2 -s 4420 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.2 -s 4420 00:10:57.045 [2024-11-20 09:04:35.799682] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468' 00:10:57.045 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:57.045 could not add new controller: failed to write to nvme-fabrics device 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.045 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:57.304 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:10:57.304 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:10:57.304 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:57.304 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:57.304 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:10:59.275 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:59.275 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:59.275 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:59.275 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:59.275 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:59.275 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:10:59.275 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:59.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:10:59.276 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:59.534 [2024-11-20 09:04:38.210678] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468'
00:10:59.534 Failed to write to /dev/nvme-fabrics: Input/output error
00:10:59.534 could not add new controller: failed to write to nvme-fabrics device
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:59.534 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:02.066 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:02.067 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.067 [2024-11-20 09:04:40.513689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:02.067 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:03.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:03.969 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:04.228 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.229 [2024-11-20 09:04:42.925081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.229 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:04.229 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:04.229 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:04.229 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:04.229 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:04.229 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:06.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.761 [2024-11-20 09:04:45.257246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.761 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:06.762 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:08.665 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:08.923 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 [2024-11-20 09:04:47.669370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:08.923 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:09.182 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:09.182 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:09.182 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:09.182 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:09.182 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:11.080 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:11.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.337 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.338 [2024-11-20 09:04:50.098842] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:11.338 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:11.596 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:11:11.596 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:11:11.596 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:11.596 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:11.596 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:13.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.498 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.757 [2024-11-20 09:04:52.422065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.757 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99
-- # for i in $(seq 1 $loops) 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 [2024-11-20 09:04:52.474059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 
09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:11:13.758 [2024-11-20 09:04:52.522092] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 [2024-11-20 09:04:52.570160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 [2024-11-20 09:04:52.618212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.758 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.017 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.017 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:14.017 "poll_groups": [ 00:11:14.017 { 00:11:14.017 "admin_qpairs": 2, 00:11:14.017 "completed_nvme_io": 66, 00:11:14.017 "current_admin_qpairs": 0, 00:11:14.017 "current_io_qpairs": 0, 00:11:14.017 "io_qpairs": 16, 00:11:14.017 "name": "nvmf_tgt_poll_group_000", 00:11:14.017 "pending_bdev_io": 0, 00:11:14.017 "transports": [ 00:11:14.017 { 00:11:14.017 "trtype": "TCP" 00:11:14.017 } 00:11:14.017 ] 00:11:14.017 }, 00:11:14.017 { 00:11:14.017 "admin_qpairs": 3, 00:11:14.017 "completed_nvme_io": 116, 00:11:14.017 "current_admin_qpairs": 0, 00:11:14.017 "current_io_qpairs": 0, 00:11:14.017 "io_qpairs": 17, 00:11:14.017 "name": "nvmf_tgt_poll_group_001", 00:11:14.017 "pending_bdev_io": 0, 00:11:14.017 "transports": [ 00:11:14.017 { 00:11:14.017 "trtype": "TCP" 00:11:14.017 } 00:11:14.017 ] 00:11:14.017 }, 00:11:14.017 { 00:11:14.017 "admin_qpairs": 1, 00:11:14.017 "completed_nvme_io": 119, 00:11:14.017 "current_admin_qpairs": 0, 00:11:14.017 "current_io_qpairs": 0, 00:11:14.017 "io_qpairs": 19, 00:11:14.017 "name": "nvmf_tgt_poll_group_002", 00:11:14.017 "pending_bdev_io": 0, 
00:11:14.017 "transports": [ 00:11:14.017 { 00:11:14.017 "trtype": "TCP" 00:11:14.017 } 00:11:14.017 ] 00:11:14.017 }, 00:11:14.017 { 00:11:14.017 "admin_qpairs": 1, 00:11:14.017 "completed_nvme_io": 119, 00:11:14.017 "current_admin_qpairs": 0, 00:11:14.017 "current_io_qpairs": 0, 00:11:14.017 "io_qpairs": 18, 00:11:14.017 "name": "nvmf_tgt_poll_group_003", 00:11:14.017 "pending_bdev_io": 0, 00:11:14.017 "transports": [ 00:11:14.017 { 00:11:14.017 "trtype": "TCP" 00:11:14.017 } 00:11:14.017 ] 00:11:14.017 } 00:11:14.017 ], 00:11:14.017 "tick_rate": 2200000000 00:11:14.017 }' 00:11:14.017 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 
-- # nvmftestfini 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:14.018 rmmod nvme_tcp 00:11:14.018 rmmod nvme_fabrics 00:11:14.018 rmmod nvme_keyring 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 74092 ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 74092 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 74092 ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 74092 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74092 00:11:14.018 killing process with pid 74092 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74092' 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 74092 00:11:14.018 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 74092 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:11:14.277 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@261 -- # continue 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # continue 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:11:14.536 ************************************ 00:11:14.536 END TEST nvmf_rpc 00:11:14.536 ************************************ 00:11:14.536 00:11:14.536 real 0m18.997s 00:11:14.536 user 1m10.210s 00:11:14.536 sys 0m2.794s 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.536 
************************************ 00:11:14.536 START TEST nvmf_invalid 00:11:14.536 ************************************ 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:14.536 * Looking for test storage... 00:11:14.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:14.536 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.796 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:14.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.796 --rc genhtml_branch_coverage=1 00:11:14.796 --rc genhtml_function_coverage=1 00:11:14.796 --rc genhtml_legend=1 00:11:14.796 --rc geninfo_all_blocks=1 00:11:14.796 --rc geninfo_unexecuted_blocks=1 00:11:14.796 00:11:14.796 ' 00:11:14.796 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:14.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.796 --rc genhtml_branch_coverage=1 00:11:14.796 --rc genhtml_function_coverage=1 00:11:14.796 --rc genhtml_legend=1 00:11:14.797 --rc geninfo_all_blocks=1 00:11:14.797 --rc geninfo_unexecuted_blocks=1 00:11:14.797 00:11:14.797 ' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.797 --rc genhtml_branch_coverage=1 00:11:14.797 --rc genhtml_function_coverage=1 00:11:14.797 --rc genhtml_legend=1 00:11:14.797 --rc geninfo_all_blocks=1 00:11:14.797 --rc geninfo_unexecuted_blocks=1 00:11:14.797 00:11:14.797 ' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:14.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.797 --rc genhtml_branch_coverage=1 00:11:14.797 --rc genhtml_function_coverage=1 00:11:14.797 --rc genhtml_legend=1 00:11:14.797 --rc geninfo_all_blocks=1 00:11:14.797 --rc geninfo_unexecuted_blocks=1 00:11:14.797 00:11:14.797 ' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@7 -- # uname -s 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.797 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:11:14.797 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:14.797 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 
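The trace above records a genuine shell error from `test/nvmf/common.sh` line 31: `'[' '' -eq 1 ']'` fails with "integer expression expected" because an empty string reaches a numeric `-eq` comparison. A minimal sketch of the pitfall and a defensive default (the `FLAG` variable here is hypothetical, standing in for whichever SPDK config variable was unset):

```shell
#!/usr/bin/env bash
# FLAG is empty, mimicking the unset variable seen in the trace.
FLAG=""

# Naive numeric test: '[ "" -eq 1 ]' raises "integer expression expected"
# and exits non-zero; stderr is suppressed here so the demo keeps running.
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
  echo "naive: flag set"
else
  echo "naive: test errored or flag unset"
fi

# Defensive form: substitute a default of 0 when the variable is empty/unset,
# so the operand handed to -eq is always a valid integer.
if [ "${FLAG:-0}" -eq 1 ]; then
  echo "safe: flag set"
else
  echo "safe: flag unset"
fi
```

The `${FLAG:-0}` expansion is the usual guard for numeric tests on optionally-set environment variables; `[[ ... ]]` with arithmetic `(( ... ))` is another common fix.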
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@223 -- # create_target_ns 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:14.797 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # create_main_bridge 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:11:14.797 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:11:14.798 
09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:14.798 
09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up initiator0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:11:14.798 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up target0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up target0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 
-- # local dev=initiator0 ip=167772161 in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:11:14.798 10.0.0.1 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:11:14.798 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:14.798 10.0.0.2 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up initiator0 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
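The `val_to_ip` calls in the trace turn packed integers from the `ip_pool` (starting at `0x0a000001`, i.e. 167772161) into dotted-quad addresses via `printf '%u.%u.%u.%u'`. A standalone sketch of that same byte-extraction arithmetic:

```shell
#!/usr/bin/env bash
# Convert the integer 167772161 (0x0A000001) into dotted-quad form,
# the same shift-and-mask arithmetic the setup.sh helper performs above.
val=167772161
printf '%u.%u.%u.%u\n' \
  $(( (val >> 24) & 0xFF )) \
  $(( (val >> 16) & 0xFF )) \
  $(( (val >> 8)  & 0xFF )) \
  $((  val        & 0xFF ))
# → 10.0.0.1
```

Incrementing the pool by one per device is why the trace assigns 10.0.0.1/10.0.0.2 to the first initiator/target pair and 10.0.0.3/10.0.0.4 to the second.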
00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:11:14.798 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up target0_br 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local 
dev=target0_br in_ns= 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:15.059 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 
-- # [[ tcp == tcp ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up initiator1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:15.060 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@151 -- # set_up target1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # set_up target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772163 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:11:15.060 10.0.0.3 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 
00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772164 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:15.060 10.0.0.4 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up initiator1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:11:15.060 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@129 -- # set_up target1_br 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:11:15.060 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # 
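The `ipts` call above expands to an `iptables` invocation that appends an `SPDK_NVMF:`-prefixed comment repeating the rule's own arguments, which presumably lets teardown later delete exactly the rules this suite installed. A dry-run sketch that prints the command instead of executing it (the real helper in nvmf/common.sh runs iptables directly and needs root; the quoting of the comment is simplified here):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the ipts helper from nvmf/common.sh: echo the iptables
# command it would run, with the SPDK_NVMF tagging comment appended, so the
# tagging behaviour can be inspected without root privileges.
ipts() {
	echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
```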
get_initiator_ip_address initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.061 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:15.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:11:15.061 00:11:15.061 --- 10.0.0.1 ping statistics --- 00:11:15.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.061 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target0 00:11:15.061 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:15.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:11:15.061 00:11:15.061 --- 10.0.0.2 ping statistics --- 00:11:15.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.061 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:15.061 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.061 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:11:15.061 00:11:15.061 --- 10.0.0.3 ping statistics --- 00:11:15.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.061 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:15.061 09:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:11:15.061 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:11:15.061 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:11:15.061 00:11:15.061 --- 10.0.0.4 ping statistics --- 00:11:15.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.061 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # return 0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:11:15.061 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
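The ping loop above alternates between pinging initiator addresses from inside the `nvmf_ns_spdk` namespace and pinging target addresses from the host. A dry-run sketch of the `ping_ip` dispatch: the real helper resolves the namespace prefix through a `local -n` nameref onto `NVMF_TARGET_NS_CMD` and `eval`s the result; here the namespace name is hardcoded and the command is printed rather than executed, so no namespace is required:

```shell
#!/usr/bin/env bash
# Dry-run sketch of ping_ip from nvmf/setup.sh: when a namespace argument
# is given, prefix the ping with "ip netns exec"; print instead of run.
ping_ip() {
	local ip=$1 in_ns=$2 count=1
	if [ -n "$in_ns" ]; then
		echo "ip netns exec nvmf_ns_spdk ping -c $count $ip"
	else
		echo "ping -c $count $ip"
	fi
}

ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
ping_ip 10.0.0.2
```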
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@159 -- # dev=initiator1 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target0 
00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target0 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:15.062 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ 
-n target1 ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo target1 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=target1 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:15.321 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:15.321 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=74649 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.321 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 74649 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74649 ']' 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.322 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:15.322 [2024-11-20 09:04:54.090128] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:11:15.322 [2024-11-20 09:04:54.090263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.580 [2024-11-20 09:04:54.243455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.580 [2024-11-20 09:04:54.315630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.580 [2024-11-20 09:04:54.315993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.580 [2024-11-20 09:04:54.316252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.580 [2024-11-20 09:04:54.316426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.580 [2024-11-20 09:04:54.316569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:15.580 [2024-11-20 09:04:54.317926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.580 [2024-11-20 09:04:54.318031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.580 [2024-11-20 09:04:54.318097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.580 [2024-11-20 09:04:54.318100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.581 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.581 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:15.581 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:15.581 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.581 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:15.839 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.839 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:15.839 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19539 00:11:16.098 [2024-11-20 09:04:54.795207] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:16.098 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/20 09:04:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19539 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 
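Just above, invalid.sh replaces the generic `nvmftestfini` trap with its own `process_shm ...; nvmftestfini; exit 1` handler on SIGINT/SIGTERM/EXIT. The underlying pattern, cleanup registered on EXIT so it fires after the body regardless of how the script ends, can be seen in isolation (a minimal illustration, not SPDK's actual handler):

```shell
#!/usr/bin/env bash
# Minimal illustration of trap-based cleanup as used by the test scripts:
# a subshell registers an EXIT trap, and the trap's output appears after
# the body's output because the trap fires as the subshell exits.
out=$(bash -c 'trap "echo cleanup ran" EXIT; echo doing work')
echo "$out"
```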
00:11:16.098 request: 00:11:16.098 { 00:11:16.098 "method": "nvmf_create_subsystem", 00:11:16.098 "params": { 00:11:16.098 "nqn": "nqn.2016-06.io.spdk:cnode19539", 00:11:16.098 "tgt_name": "foobar" 00:11:16.098 } 00:11:16.098 } 00:11:16.098 Got JSON-RPC error response 00:11:16.098 GoRPCClient: error on JSON-RPC call' 00:11:16.098 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/20 09:04:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19539 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:16.098 request: 00:11:16.098 { 00:11:16.098 "method": "nvmf_create_subsystem", 00:11:16.098 "params": { 00:11:16.098 "nqn": "nqn.2016-06.io.spdk:cnode19539", 00:11:16.098 "tgt_name": "foobar" 00:11:16.098 } 00:11:16.098 } 00:11:16.098 Got JSON-RPC error response 00:11:16.098 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:16.098 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:16.098 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8494 00:11:16.356 [2024-11-20 09:04:55.135584] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8494: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:16.356 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8494 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:16.356 request: 00:11:16.356 { 00:11:16.356 "method": "nvmf_create_subsystem", 00:11:16.356 "params": 
{ 00:11:16.356 "nqn": "nqn.2016-06.io.spdk:cnode8494", 00:11:16.356 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:16.356 } 00:11:16.356 } 00:11:16.356 Got JSON-RPC error response 00:11:16.356 GoRPCClient: error on JSON-RPC call' 00:11:16.356 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode8494 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:16.356 request: 00:11:16.356 { 00:11:16.356 "method": "nvmf_create_subsystem", 00:11:16.356 "params": { 00:11:16.356 "nqn": "nqn.2016-06.io.spdk:cnode8494", 00:11:16.356 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:16.356 } 00:11:16.356 } 00:11:16.356 Got JSON-RPC error response 00:11:16.356 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:16.356 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:16.356 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11593 00:11:16.615 [2024-11-20 09:04:55.431928] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11593: invalid model number 'SPDK_Controller' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11593], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:16.615 request: 00:11:16.615 { 00:11:16.615 "method": "nvmf_create_subsystem", 00:11:16.615 "params": { 00:11:16.615 "nqn": "nqn.2016-06.io.spdk:cnode11593", 00:11:16.615 
"model_number": "SPDK_Controller\u001f" 00:11:16.615 } 00:11:16.615 } 00:11:16.615 Got JSON-RPC error response 00:11:16.615 GoRPCClient: error on JSON-RPC call' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11593], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:16.615 request: 00:11:16.615 { 00:11:16.615 "method": "nvmf_create_subsystem", 00:11:16.615 "params": { 00:11:16.615 "nqn": "nqn.2016-06.io.spdk:cnode11593", 00:11:16.615 "model_number": "SPDK_Controller\u001f" 00:11:16.615 } 00:11:16.615 } 00:11:16.615 Got JSON-RPC error response 00:11:16.615 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll = 0 )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=$'\177' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x51' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 
00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.615 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 
09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '5u$6pIQPs&&'\''K$t,oew]' 00:11:16.874 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '5u$6pIQPs&&'\''K$t,oew]' nqn.2016-06.io.spdk:cnode13580 00:11:17.133 [2024-11-20 09:04:55.876279] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13580: invalid serial number '5u$6pIQPs&&'K$t,oew]' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13580 
serial_number:5u$6pIQPs&&'\''K$t,oew]], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5u$6pIQPs&&'\''K$t,oew] 00:11:17.133 request: 00:11:17.133 { 00:11:17.133 "method": "nvmf_create_subsystem", 00:11:17.133 "params": { 00:11:17.133 "nqn": "nqn.2016-06.io.spdk:cnode13580", 00:11:17.133 "serial_number": "5u$\u007f6pIQPs&&'\''K$t,oew]" 00:11:17.133 } 00:11:17.133 } 00:11:17.133 Got JSON-RPC error response 00:11:17.133 GoRPCClient: error on JSON-RPC call' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/20 09:04:55 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13580 serial_number:5u$6pIQPs&&'K$t,oew]], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5u$6pIQPs&&'K$t,oew] 00:11:17.133 request: 00:11:17.133 { 00:11:17.133 "method": "nvmf_create_subsystem", 00:11:17.133 "params": { 00:11:17.133 "nqn": "nqn.2016-06.io.spdk:cnode13580", 00:11:17.133 "serial_number": "5u$\u007f6pIQPs&&'K$t,oew]" 00:11:17.133 } 00:11:17.133 } 00:11:17.133 Got JSON-RPC error response 00:11:17.133 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' 
'119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=8 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.133 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x44' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 59 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.134 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 
00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:17.393 
09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.393 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.393 
09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:17.394 09:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'\''O/xb;R"xi' 00:11:17.394 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'\''O/xb;R"xi' nqn.2016-06.io.spdk:cnode13650 00:11:17.652 [2024-11-20 09:04:56.412751] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13650: invalid model number 'z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'O/xb;R"xi' 00:11:17.652 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/20 09:04:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'\''O/xb;R"xi nqn:nqn.2016-06.io.spdk:cnode13650], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'\''O/xb;R"xi 00:11:17.652 request: 00:11:17.652 { 00:11:17.652 "method": "nvmf_create_subsystem", 00:11:17.652 "params": { 00:11:17.652 "nqn": "nqn.2016-06.io.spdk:cnode13650", 00:11:17.652 "model_number": "z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'\''O/xb;\u007fR\"xi" 00:11:17.652 } 00:11:17.652 } 00:11:17.652 Got JSON-RPC error response 00:11:17.652 GoRPCClient: error on JSON-RPC call' 00:11:17.652 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/20 09:04:56 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'O/xb;R"xi nqn:nqn.2016-06.io.spdk:cnode13650], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'O/xb;R"xi 00:11:17.652 request: 00:11:17.652 { 00:11:17.652 
"method": "nvmf_create_subsystem", 00:11:17.652 "params": { 00:11:17.652 "nqn": "nqn.2016-06.io.spdk:cnode13650", 00:11:17.652 "model_number": "z @rDht,;Q8GWQDC&N;KjamU|!9Ei`'O/xb;\u007fR\"xi" 00:11:17.652 } 00:11:17.652 } 00:11:17.652 Got JSON-RPC error response 00:11:17.652 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:17.653 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:17.911 [2024-11-20 09:04:56.733101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.911 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:18.170 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.1 -s 4421 00:11:18.429 [2024-11-20 09:04:57.305634] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:18.429 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='2024/11/20 09:04:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr:10.0.0.1 trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:18.429 request: 00:11:18.429 { 00:11:18.429 "method": "nvmf_subsystem_remove_listener", 00:11:18.429 "params": { 00:11:18.429 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:18.429 "listen_address": { 00:11:18.429 "trtype": "tcp", 00:11:18.429 "traddr": "10.0.0.1", 00:11:18.429 "trsvcid": "4421" 00:11:18.429 } 00:11:18.429 } 00:11:18.429 } 00:11:18.429 Got JSON-RPC error response 00:11:18.429 GoRPCClient: error on JSON-RPC call' 00:11:18.429 
09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ 2024/11/20 09:04:57 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr:10.0.0.1 trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:18.429 request: 00:11:18.429 { 00:11:18.429 "method": "nvmf_subsystem_remove_listener", 00:11:18.429 "params": { 00:11:18.429 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:18.429 "listen_address": { 00:11:18.429 "trtype": "tcp", 00:11:18.429 "traddr": "10.0.0.1", 00:11:18.429 "trsvcid": "4421" 00:11:18.429 } 00:11:18.429 } 00:11:18.429 } 00:11:18.429 Got JSON-RPC error response 00:11:18.429 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:18.430 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5067 -i 0 00:11:18.688 [2024-11-20 09:04:57.574845] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5067: invalid cntlid range [0-65519] 00:11:18.688 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='2024/11/20 09:04:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5067], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:18.688 request: 00:11:18.688 { 00:11:18.688 "method": "nvmf_create_subsystem", 00:11:18.688 "params": { 00:11:18.688 "nqn": "nqn.2016-06.io.spdk:cnode5067", 00:11:18.688 "min_cntlid": 0 00:11:18.688 } 00:11:18.688 } 00:11:18.688 Got JSON-RPC error response 00:11:18.688 GoRPCClient: error on JSON-RPC call' 00:11:18.688 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # [[ 2024/11/20 09:04:57 error on 
JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5067], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:18.688 request: 00:11:18.688 { 00:11:18.688 "method": "nvmf_create_subsystem", 00:11:18.688 "params": { 00:11:18.688 "nqn": "nqn.2016-06.io.spdk:cnode5067", 00:11:18.688 "min_cntlid": 0 00:11:18.688 } 00:11:18.688 } 00:11:18.688 Got JSON-RPC error response 00:11:18.688 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.688 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13168 -i 65520 00:11:18.947 [2024-11-20 09:04:57.835083] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13168: invalid cntlid range [65520-65519] 00:11:18.947 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='2024/11/20 09:04:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13168], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:18.947 request: 00:11:18.947 { 00:11:18.947 "method": "nvmf_create_subsystem", 00:11:18.947 "params": { 00:11:18.947 "nqn": "nqn.2016-06.io.spdk:cnode13168", 00:11:18.947 "min_cntlid": 65520 00:11:18.947 } 00:11:18.947 } 00:11:18.947 Got JSON-RPC error response 00:11:18.947 GoRPCClient: error on JSON-RPC call' 00:11:18.947 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ 2024/11/20 09:04:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode13168], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:18.947 request: 00:11:18.947 { 
00:11:18.947 "method": "nvmf_create_subsystem", 00:11:18.947 "params": { 00:11:18.947 "nqn": "nqn.2016-06.io.spdk:cnode13168", 00:11:18.947 "min_cntlid": 65520 00:11:18.947 } 00:11:18.947 } 00:11:18.947 Got JSON-RPC error response 00:11:18.947 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:18.947 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30831 -I 0 00:11:19.515 [2024-11-20 09:04:58.155375] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30831: invalid cntlid range [1-0] 00:11:19.516 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30831], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:19.516 request: 00:11:19.516 { 00:11:19.516 "method": "nvmf_create_subsystem", 00:11:19.516 "params": { 00:11:19.516 "nqn": "nqn.2016-06.io.spdk:cnode30831", 00:11:19.516 "max_cntlid": 0 00:11:19.516 } 00:11:19.516 } 00:11:19.516 Got JSON-RPC error response 00:11:19.516 GoRPCClient: error on JSON-RPC call' 00:11:19.516 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ 2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode30831], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:19.516 request: 00:11:19.516 { 00:11:19.516 "method": "nvmf_create_subsystem", 00:11:19.516 "params": { 00:11:19.516 "nqn": "nqn.2016-06.io.spdk:cnode30831", 00:11:19.516 "max_cntlid": 0 00:11:19.516 } 00:11:19.516 } 00:11:19.516 Got JSON-RPC error response 00:11:19.516 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:19.516 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11556 -I 65520 00:11:19.516 [2024-11-20 09:04:58.427600] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11556: invalid cntlid range [1-65520] 00:11:19.775 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11556], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:19.775 request: 00:11:19.775 { 00:11:19.775 "method": "nvmf_create_subsystem", 00:11:19.775 "params": { 00:11:19.775 "nqn": "nqn.2016-06.io.spdk:cnode11556", 00:11:19.775 "max_cntlid": 65520 00:11:19.775 } 00:11:19.775 } 00:11:19.775 Got JSON-RPC error response 00:11:19.775 GoRPCClient: error on JSON-RPC call' 00:11:19.775 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ 2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11556], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:19.775 request: 00:11:19.775 { 00:11:19.775 "method": "nvmf_create_subsystem", 00:11:19.775 "params": { 00:11:19.775 "nqn": "nqn.2016-06.io.spdk:cnode11556", 00:11:19.775 "max_cntlid": 65520 00:11:19.775 } 00:11:19.775 } 00:11:19.775 Got JSON-RPC error response 00:11:19.775 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:19.775 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1240 -i 6 -I 5 00:11:20.034 [2024-11-20 09:04:58.731892] 
nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1240: invalid cntlid range [6-5] 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # out='2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode1240], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:20.034 request: 00:11:20.034 { 00:11:20.034 "method": "nvmf_create_subsystem", 00:11:20.034 "params": { 00:11:20.034 "nqn": "nqn.2016-06.io.spdk:cnode1240", 00:11:20.034 "min_cntlid": 6, 00:11:20.034 "max_cntlid": 5 00:11:20.034 } 00:11:20.034 } 00:11:20.034 Got JSON-RPC error response 00:11:20.034 GoRPCClient: error on JSON-RPC call' 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ 2024/11/20 09:04:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode1240], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:20.034 request: 00:11:20.034 { 00:11:20.034 "method": "nvmf_create_subsystem", 00:11:20.034 "params": { 00:11:20.034 "nqn": "nqn.2016-06.io.spdk:cnode1240", 00:11:20.034 "min_cntlid": 6, 00:11:20.034 "max_cntlid": 5 00:11:20.034 } 00:11:20.034 } 00:11:20.034 Got JSON-RPC error response 00:11:20.034 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:11:20.034 { 00:11:20.034 "name": "foobar", 00:11:20.034 "method": "nvmf_delete_target", 00:11:20.034 "req_id": 1 00:11:20.034 } 00:11:20.034 Got JSON-RPC 
error response 00:11:20.034 response: 00:11:20.034 { 00:11:20.034 "code": -32602, 00:11:20.034 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:20.034 }' 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:11:20.034 { 00:11:20.034 "name": "foobar", 00:11:20.034 "method": "nvmf_delete_target", 00:11:20.034 "req_id": 1 00:11:20.034 } 00:11:20.034 Got JSON-RPC error response 00:11:20.034 response: 00:11:20.034 { 00:11:20.034 "code": -32602, 00:11:20.034 "message": "The specified target doesn't exist, cannot delete it." 00:11:20.034 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:20.034 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:20.034 rmmod nvme_tcp 00:11:20.034 rmmod nvme_fabrics 00:11:20.034 rmmod nvme_keyring 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:11:20.293 09:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 74649 ']' 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 74649 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74649 ']' 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74649 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74649 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.293 killing process with pid 74649 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74649' 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74649 00:11:20.293 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74649 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@254 -- # local dev 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:20.293 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:20.293 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # continue 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # continue 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@548 -- # iptables-save 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-restore 00:11:20.552 00:11:20.552 real 0m6.074s 00:11:20.552 user 0m23.319s 00:11:20.552 sys 0m1.456s 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:20.552 ************************************ 00:11:20.552 END TEST nvmf_invalid 00:11:20.552 ************************************ 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.552 ************************************ 00:11:20.552 START TEST nvmf_connect_stress 00:11:20.552 ************************************ 00:11:20.552 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:20.811 * Looking for test storage... 
00:11:20.811 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- scripts/common.sh@345 -- # : 1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:20.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.811 --rc genhtml_branch_coverage=1 00:11:20.811 --rc genhtml_function_coverage=1 00:11:20.811 --rc genhtml_legend=1 00:11:20.811 --rc geninfo_all_blocks=1 00:11:20.811 --rc geninfo_unexecuted_blocks=1 00:11:20.811 00:11:20.811 ' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:20.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.811 --rc genhtml_branch_coverage=1 00:11:20.811 --rc genhtml_function_coverage=1 00:11:20.811 --rc genhtml_legend=1 00:11:20.811 --rc geninfo_all_blocks=1 00:11:20.811 --rc geninfo_unexecuted_blocks=1 00:11:20.811 00:11:20.811 ' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:20.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.811 --rc genhtml_branch_coverage=1 00:11:20.811 --rc genhtml_function_coverage=1 00:11:20.811 --rc genhtml_legend=1 00:11:20.811 --rc geninfo_all_blocks=1 00:11:20.811 --rc geninfo_unexecuted_blocks=1 00:11:20.811 00:11:20.811 ' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:20.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.811 --rc genhtml_branch_coverage=1 00:11:20.811 --rc genhtml_function_coverage=1 00:11:20.811 --rc genhtml_legend=1 00:11:20.811 --rc geninfo_all_blocks=1 00:11:20.811 --rc geninfo_unexecuted_blocks=1 00:11:20.811 00:11:20.811 ' 00:11:20.811 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.812 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@50 -- # : 0 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:20.812 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # 
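The `[: : integer expression expected` diagnostic recorded above comes from nvmf/common.sh line 31 evaluating `'[' '' -eq 1 ']'`: `-eq` needs two integer operands, and the variable expanded to an empty string. A small reproduction and a defensive variant; `no_huge` and `check_no_huge` are illustrative names, not the script's own:

```shell
no_huge=""

# Reproduction: with an empty left operand, `[` prints a diagnostic
# and fails, so the body is simply skipped (as in the log above).
if [ "$no_huge" -eq 1 ] 2>/dev/null; then
    echo "huge pages disabled"
fi

# Defensive variant: default an empty/unset value to 0 so -eq always
# compares two integers and never emits the diagnostic.
check_no_huge() {
    if [ "${1:-0}" -eq 1 ]; then
        echo "huge pages disabled"
    else
        echo "huge pages not requested"
    fi
}

check_no_huge "$no_huge"   # prints: huge pages not requested
check_no_huge 1            # prints: huge pages disabled
```

Because the erroring test returns non-zero, the run above falls through to the safe defaults and continues; the message is noise rather than a failure.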
xtrace_disable_per_cmd _remove_target_ns 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:20.812 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:20.813 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:11:20.813 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up target0 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:11:20.813 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:11:21.072 10.0.0.1 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 
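In the trace above, nvmf/setup.sh carries addresses as 32-bit integers (the pool starts at 0x0a000001 = 167772161) and `val_to_ip` renders them as dotted quads via `printf '%u.%u.%u.%u\n' 10 0 0 1`. The log only shows the final printf, so the byte extraction below is a plausible reconstruction, not the script's verbatim code:

```shell
# Unpack a 32-bit integer into a dotted-quad IPv4 address.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # prints: 10.0.0.1 (assigned to initiator0)
val_to_ip 167772162   # prints: 10.0.0.2 (assigned to target0)
```

Keeping the pool as an integer lets the setup code allocate consecutive addresses with plain arithmetic (`$((++ip))`) instead of string manipulation.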
00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:21.072 10.0.0.2 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:21.072 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.072 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_bridge 
target0_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 
167772163 tcp 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:21.073 
09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@151 -- # set_up target1 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772163 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee 
/sys/class/net/initiator1/ifalias' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:11:21.073 10.0.0.3 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:21.073 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:21.074 10.0.0.4 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set 
initiator1_br master nvmf_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@129 -- # set_up target1_br 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 
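
Annotation: condensed to its effects, the `setup.sh` trace above wires one initiator/target veth pair into the test topology. This is a sketch reconstructed from the commands visible in the trace, not the script itself (the `ip link add ... veth` step happens earlier in the log; device and namespace names are taken from the trace). It requires root and a Linux host:

```shell
# Reconstructed per-pair wiring, as performed by nvmf/setup.sh for pair 1
# (sketch only -- the actual script loops over pairs and parameterizes names).
ip link set target1 netns nvmf_ns_spdk              # move target end into the SPDK netns
ip addr add 10.0.0.3/24 dev initiator1              # initiator side, host namespace
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
ip link set initiator1_br master nvmf_br            # bridge ends join nvmf_br
ip link set target1_br master nvmf_br
ip link set initiator1_br up
ip link set target1_br up
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
```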
00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:21.074 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:21.332 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:21.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
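
Annotation: the `val_to_ip` helper seen earlier in the trace (`printf '%u.%u.%u.%u\n' 10 0 0 3` for pool value `167772163`) converts a 32-bit integer from the IP pool into dotted-quad form. A minimal self-contained sketch of that conversion, assuming the script splits the value with shifts and masks before the `printf` (the trace only shows the final `printf`):

```shell
# Convert a 32-bit integer to dotted-quad notation, as val_to_ip does.
# 167772163 == 10*2^24 + 3, i.e. 10.0.0.3; the pool advances by 2 per pair.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 255)) $(((val >> 8) & 255)) $((val & 255))
}

val_to_ip 167772163   # 10.0.0.3
val_to_ip 167772164   # 10.0.0.4
```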
00:11:21.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:11:21.332 00:11:21.332 --- 10.0.0.1 ping statistics --- 00:11:21.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.332 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target0 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target0 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:21.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:11:21.332 00:11:21.332 --- 10.0.0.2 ping statistics --- 00:11:21.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.332 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:21.332 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:21.333 
09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:21.333 PING 10.0.0.3 
(10.0.0.3) 56(84) bytes of data. 00:11:21.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:11:21.333 00:11:21.333 --- 10.0.0.3 ping statistics --- 00:11:21.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.333 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:21.333 09:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:11:21.333 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:21.333 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:11:21.333 00:11:21.333 --- 10.0.0.4 ping statistics --- 00:11:21.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.333 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # return 0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 
-- # NVMF_TARGET_INTERFACE=target0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:21.333 09:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target0 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:21.333 09:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=target1 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
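
Annotation: throughout the trace, `get_ip_address` recovers a device's IP by reading back the `ifalias` that `set_ip` wrote, instead of parsing `ip addr` output. A sketch of that lookup (the function name matches the trace; the optional sysfs-root parameter is my addition so the sketch can be exercised without real devices):

```shell
# Read a device's IP from its kernel ifalias, as nvmf/setup.sh does.
# set_ip stored it earlier with: echo "$ip" | tee /sys/class/net/$dev/ifalias
get_ip_address() {
    local dev=$1 sysfs_root=${2:-/sys/class/net}   # second arg is illustrative only
    cat "$sysfs_root/$dev/ifalias"
}
```

For namespaced devices the trace prefixes the same read with `ip netns exec nvmf_ns_spdk`, which is why target addresses are fetched via `NVMF_TARGET_NS_CMD`.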
00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=75196 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 75196 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 75196 ']' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.333 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.333 [2024-11-20 09:05:00.220368] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:11:21.333 [2024-11-20 09:05:00.220473] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.591 [2024-11-20 09:05:00.374288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.591 [2024-11-20 09:05:00.448129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.591 [2024-11-20 09:05:00.448202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:21.591 [2024-11-20 09:05:00.448217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.591 [2024-11-20 09:05:00.448227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.591 [2024-11-20 09:05:00.448237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.591 [2024-11-20 09:05:00.449561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.591 [2024-11-20 09:05:00.449750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.591 [2024-11-20 09:05:00.449778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 [2024-11-20 09:05:00.638892] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.850 
09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 [2024-11-20 09:05:00.663074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.850 NULL1 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=75233 00:11:21.850 
09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in 
$(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # 
cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.850 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.415 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:22.415 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:22.415 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.415 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.415 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.674 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.674 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:22.674 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.674 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.674 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.932 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.932 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:22.932 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.932 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.932 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.189 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.189 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:23.189 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:23.189 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.189 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.447 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.447 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:23.447 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.447 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.447 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.018 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.018 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:24.018 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.018 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.018 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.283 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.283 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:24.283 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.283 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.283 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:11:24.541 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.541 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:24.541 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.541 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.541 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.800 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.800 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:24.800 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.800 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.800 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.058 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.058 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:25.058 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.058 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.058 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.623 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.623 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 
00:11:25.623 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.623 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.623 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.882 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.882 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:25.882 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.882 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.882 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.140 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.140 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:26.140 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.140 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.140 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.398 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.398 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:26.398 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.398 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:26.398 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.963 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.963 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:26.963 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.963 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.963 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.222 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.222 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:27.222 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.222 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.222 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.480 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.480 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:27.480 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.480 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.480 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.739 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:27.739 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:27.739 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.739 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.739 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.998 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.998 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:27.998 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.998 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.998 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.566 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.566 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:28.566 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.566 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.566 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.825 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.825 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:28.825 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:11:28.825 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.825 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.085 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.085 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:29.085 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.085 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.085 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.344 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.344 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:29.344 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.344 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.344 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.601 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.601 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:29.601 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.601 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.601 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:11:30.166 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.167 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:30.167 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.167 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.167 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.437 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.437 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:30.437 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.437 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.437 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.697 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.697 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:30.697 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.697 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.697 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.955 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.955 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 
00:11:30.955 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.955 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.955 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.212 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.212 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:31.212 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.212 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.212 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.778 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.778 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:31.778 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.778 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.778 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.035 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.035 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:32.036 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.036 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:32.036 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.036 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.293 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.293 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75233 00:11:32.294 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75233) - No such process 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75233 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:32.294 rmmod nvme_tcp 00:11:32.294 rmmod nvme_fabrics 00:11:32.294 rmmod nvme_keyring 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:32.294 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 75196 ']' 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 75196 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 75196 ']' 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 75196 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75196 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:32.294 killing process with pid 75196 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75196' 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 75196 00:11:32.294 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 75196 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:11:32.552 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:11:32.552 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # continue 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 
00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # continue 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore 00:11:32.811 00:11:32.811 real 0m12.158s 00:11:32.811 user 0m39.341s 00:11:32.811 sys 0m3.565s 00:11:32.811 ************************************ 00:11:32.811 END TEST nvmf_connect_stress 00:11:32.811 ************************************ 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.811 09:05:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.811 ************************************ 00:11:32.811 START TEST nvmf_fused_ordering 00:11:32.811 ************************************ 00:11:32.811 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:32.811 * Looking for test storage... 00:11:33.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:33.072 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.072 --rc genhtml_branch_coverage=1 00:11:33.072 --rc genhtml_function_coverage=1 00:11:33.072 --rc genhtml_legend=1 00:11:33.072 --rc geninfo_all_blocks=1 00:11:33.072 --rc geninfo_unexecuted_blocks=1 00:11:33.072 00:11:33.072 ' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.072 --rc genhtml_branch_coverage=1 00:11:33.072 --rc genhtml_function_coverage=1 00:11:33.072 --rc genhtml_legend=1 00:11:33.072 --rc geninfo_all_blocks=1 00:11:33.072 --rc geninfo_unexecuted_blocks=1 00:11:33.072 00:11:33.072 ' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.072 --rc genhtml_branch_coverage=1 00:11:33.072 --rc genhtml_function_coverage=1 00:11:33.072 --rc genhtml_legend=1 00:11:33.072 --rc geninfo_all_blocks=1 00:11:33.072 --rc geninfo_unexecuted_blocks=1 00:11:33.072 00:11:33.072 ' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.072 --rc genhtml_branch_coverage=1 00:11:33.072 --rc genhtml_function_coverage=1 00:11:33.072 --rc genhtml_legend=1 00:11:33.072 --rc geninfo_all_blocks=1 00:11:33.072 --rc geninfo_unexecuted_blocks=1 00:11:33.072 00:11:33.072 ' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.072 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.072 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:33.073 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: 
: integer expression expected 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:11:33.073 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@280 -- # nvmf_veth_init 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@223 -- # create_target_ns 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # create_main_bridge 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@105 -- # delete_main_bridge 
00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:33.073 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up initiator0 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:33.073 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up target0 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@207 -- # ip link set target0 up 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up target0_br 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:33.073 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns target0 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:33.074 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:11:33.074 10.0.0.1 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 
00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:33.074 10.0.0.2 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up initiator0 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:33.074 
09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:11:33.074 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:11:33.333 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up target0_br 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:33.333 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:33.333 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up initiator1 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:33.333 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:33.334 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@151 -- # set_up target1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1 up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # set_up target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns target1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # 
ip link set target1 netns nvmf_ns_spdk 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772163 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:11:33.334 10.0.0.3 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.334 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772164 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:11:33.334 10.0.0.4 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up initiator1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:33.334 09:05:12 
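The `val_to_ip` calls in the trace above turn a 32-bit integer from the test's IP pool (e.g. 167772163) into dotted-quad notation (10.0.0.3). A minimal standalone reimplementation of that helper — a sketch, not the actual `nvmf/setup.sh` source — looks like:

```shell
# Hedged sketch of the val_to_ip helper seen in the trace: split a
# 32-bit integer into four octets and print them dotted-quad style.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772163   # 10.0.0.3 (matches the initiator1 address above)
val_to_ip 167772164   # 10.0.0.4 (matches the target1 address above)
```

This also explains the `ip_pool += 2` step in the trace: each interface pair consumes two consecutive integers from the pool, one for the initiator and one for the target.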
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ 
veth == veth ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@129 -- # set_up target1_br 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 
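The per-pair setup traced above follows one repeating pattern: create two veth pairs, move the target end into the `nvmf_ns_spdk` namespace, enslave both bridge-side peers to `nvmf_br`, and open TCP port 4420 for the initiator. The sketch below reconstructs that sequence from the trace as a dry run (it echoes the commands instead of executing them, since the real commands need root); it is an illustration, not the actual `setup_interface_pair` from `nvmf/setup.sh`.

```shell
# Dry-run sketch of the traced per-pair setup. RUN=echo (the default
# here) prints each command; set RUN= to execute for real (needs root).
RUN=${RUN:-echo}

setup_interface_pair_sketch() {
  local id=$1 ns=nvmf_ns_spdk bridge=nvmf_br
  # Two veth pairs: one for the initiator side, one for the target side.
  $RUN ip link add "initiator$id" type veth peer name "initiator${id}_br"
  $RUN ip link add "target$id" type veth peer name "target${id}_br"
  # The target end lives inside the test's network namespace.
  $RUN ip link set "target$id" netns "$ns"
  # Both bridge-side peers are enslaved to the shared bridge.
  $RUN ip link set "initiator${id}_br" master "$bridge"
  $RUN ip link set "target${id}_br" master "$bridge"
  # Allow NVMe/TCP traffic (port 4420) in from the initiator interface.
  $RUN iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT
}

setup_interface_pair_sketch 1
```

IP assignment and the `set_up` calls are omitted for brevity; in the trace they follow the same `eval`-through-`$in_ns` indirection so that the same helper works both on the host and via `ip netns exec`.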
00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:33.334 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:33.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:11:33.335 00:11:33.335 --- 10.0.0.1 ping statistics --- 00:11:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.335 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target0 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:33.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:11:33.335 00:11:33.335 --- 10.0.0.2 ping statistics --- 00:11:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.335 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:33.335 
09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:33.335 PING 10.0.0.3 
(10.0.0.3) 56(84) bytes of data. 00:11:33.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:11:33.335 00:11:33.335 --- 10.0.0.3 ping statistics --- 00:11:33.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.335 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target1 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:33.335 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:33.335 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:11:33.336 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:33.336 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.120 ms 00:11:33.336 00:11:33.336 --- 10.0.0.4 ping statistics --- 00:11:33.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.336 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.336 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # return 0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 
-- # NVMF_TARGET_INTERFACE=target0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:33.594 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.594 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target0 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target0 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:33.595 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo target1 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=target1 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=75615 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 75615 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75615 ']' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:33.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:33.595 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:33.595 [2024-11-20 09:05:12.394388] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:11:33.595 [2024-11-20 09:05:12.394511] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.854 [2024-11-20 09:05:12.554826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.854 [2024-11-20 09:05:12.618518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.854 [2024-11-20 09:05:12.618588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:33.854 [2024-11-20 09:05:12.618602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.854 [2024-11-20 09:05:12.618613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.854 [2024-11-20 09:05:12.618622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:33.854 [2024-11-20 09:05:12.619108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.854 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.854 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:33.854 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:33.854 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:33.854 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 [2024-11-20 09:05:12.800500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 [2024-11-20 09:05:12.820646] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 NULL1 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.112 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:34.112 [2024-11-20 09:05:12.877094] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:11:34.112 [2024-11-20 09:05:12.877152] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75650 ] 00:11:34.678 Attached to nqn.2016-06.io.spdk:cnode1 00:11:34.678 Namespace ID: 1 size: 1GB 00:11:34.678 fused_ordering(0) 00:11:34.678 fused_ordering(1) 00:11:34.678 fused_ordering(2) 00:11:34.678 fused_ordering(3) 00:11:34.678 fused_ordering(4) 00:11:34.678 fused_ordering(5) 00:11:34.678 fused_ordering(6) 00:11:34.678 fused_ordering(7) 00:11:34.678 fused_ordering(8) 00:11:34.678 fused_ordering(9) 00:11:34.678 fused_ordering(10) 00:11:34.678 fused_ordering(11) 00:11:34.678 fused_ordering(12) 00:11:34.678 fused_ordering(13) 00:11:34.678 fused_ordering(14) 00:11:34.678 fused_ordering(15) 00:11:34.678 fused_ordering(16) 00:11:34.678 fused_ordering(17) 00:11:34.678 fused_ordering(18) 00:11:34.678 fused_ordering(19) 00:11:34.678 fused_ordering(20) 00:11:34.678 fused_ordering(21) 00:11:34.678 fused_ordering(22) 00:11:34.678 fused_ordering(23) 00:11:34.678 fused_ordering(24) 00:11:34.678 fused_ordering(25) 00:11:34.678 fused_ordering(26) 00:11:34.678 fused_ordering(27) 00:11:34.678 fused_ordering(28) 00:11:34.678 fused_ordering(29) 00:11:34.678 fused_ordering(30) 00:11:34.678 fused_ordering(31) 00:11:34.678 fused_ordering(32) 00:11:34.678 fused_ordering(33) 00:11:34.678 fused_ordering(34) 00:11:34.678 fused_ordering(35) 00:11:34.678 fused_ordering(36) 00:11:34.678 fused_ordering(37) 00:11:34.678 fused_ordering(38) 00:11:34.678 fused_ordering(39) 00:11:34.678 fused_ordering(40) 00:11:34.678 fused_ordering(41) 00:11:34.678 fused_ordering(42) 00:11:34.678 fused_ordering(43) 00:11:34.678 fused_ordering(44) 00:11:34.678 fused_ordering(45) 00:11:34.678 fused_ordering(46) 00:11:34.678 fused_ordering(47) 00:11:34.678 fused_ordering(48) 00:11:34.678 fused_ordering(49) 
00:11:34.678 fused_ordering(50) 00:11:34.678 fused_ordering(51) 00:11:34.679 fused_ordering(52) 00:11:34.679 fused_ordering(53) 00:11:34.679 fused_ordering(54) 00:11:34.679 fused_ordering(55) 00:11:34.679 fused_ordering(56) 00:11:34.679 fused_ordering(57) 00:11:34.679 fused_ordering(58) 00:11:34.679 fused_ordering(59) 00:11:34.679 fused_ordering(60) 00:11:34.679 fused_ordering(61) 00:11:34.679 fused_ordering(62) 00:11:34.679 fused_ordering(63) 00:11:34.679 fused_ordering(64) 00:11:34.679 fused_ordering(65) 00:11:34.679 fused_ordering(66) 00:11:34.679 fused_ordering(67) 00:11:34.679 fused_ordering(68) 00:11:34.679 fused_ordering(69) 00:11:34.679 fused_ordering(70) 00:11:34.679 fused_ordering(71) 00:11:34.679 fused_ordering(72) 00:11:34.679 fused_ordering(73) 00:11:34.679 fused_ordering(74) 00:11:34.679 fused_ordering(75) 00:11:34.679 fused_ordering(76) 00:11:34.679 fused_ordering(77) 00:11:34.679 fused_ordering(78) 00:11:34.679 fused_ordering(79) 00:11:34.679 fused_ordering(80) 00:11:34.679 fused_ordering(81) 00:11:34.679 fused_ordering(82) 00:11:34.679 fused_ordering(83) 00:11:34.679 fused_ordering(84) 00:11:34.679 fused_ordering(85) 00:11:34.679 fused_ordering(86) 00:11:34.679 fused_ordering(87) 00:11:34.679 fused_ordering(88) 00:11:34.679 fused_ordering(89) 00:11:34.679 fused_ordering(90) 00:11:34.679 fused_ordering(91) 00:11:34.679 fused_ordering(92) 00:11:34.679 fused_ordering(93) 00:11:34.679 fused_ordering(94) 00:11:34.679 fused_ordering(95) 00:11:34.679 fused_ordering(96) 00:11:34.679 fused_ordering(97) 00:11:34.679 fused_ordering(98) 00:11:34.679 fused_ordering(99) 00:11:34.679 fused_ordering(100) 00:11:34.679 fused_ordering(101) 00:11:34.679 fused_ordering(102) 00:11:34.679 fused_ordering(103) 00:11:34.679 fused_ordering(104) 00:11:34.679 fused_ordering(105) 00:11:34.679 fused_ordering(106) 00:11:34.679 fused_ordering(107) 00:11:34.679 fused_ordering(108) 00:11:34.679 fused_ordering(109) 00:11:34.679 fused_ordering(110) 00:11:34.679 fused_ordering(111) 
00:11:34.679 fused_ordering(112) 00:11:34.679 fused_ordering(113) 00:11:34.679 fused_ordering(114) 00:11:34.679 fused_ordering(115) 00:11:34.679 fused_ordering(116) 00:11:34.679 fused_ordering(117) 00:11:34.679 fused_ordering(118) 00:11:34.679 fused_ordering(119) 00:11:34.679 fused_ordering(120) 00:11:34.679 fused_ordering(121) 00:11:34.679 fused_ordering(122) 00:11:34.679 fused_ordering(123) 00:11:34.679 fused_ordering(124) 00:11:34.679 fused_ordering(125) 00:11:34.679 fused_ordering(126) 00:11:34.679 fused_ordering(127) 00:11:34.679 fused_ordering(128) 00:11:34.679 fused_ordering(129) 00:11:34.679 fused_ordering(130) 00:11:34.679 fused_ordering(131) 00:11:34.679 fused_ordering(132) 00:11:34.679 fused_ordering(133) 00:11:34.679 fused_ordering(134) 00:11:34.679 fused_ordering(135) 00:11:34.679 fused_ordering(136) 00:11:34.679 fused_ordering(137) 00:11:34.679 fused_ordering(138) 00:11:34.679 fused_ordering(139) 00:11:34.679 fused_ordering(140) 00:11:34.679 fused_ordering(141) 00:11:34.679 fused_ordering(142) 00:11:34.679 fused_ordering(143) 00:11:34.679 fused_ordering(144) 00:11:34.679 fused_ordering(145) 00:11:34.679 fused_ordering(146) 00:11:34.679 fused_ordering(147) 00:11:34.679 fused_ordering(148) 00:11:34.679 fused_ordering(149) 00:11:34.679 fused_ordering(150) 00:11:34.679 fused_ordering(151) 00:11:34.679 fused_ordering(152) 00:11:34.679 fused_ordering(153) 00:11:34.679 fused_ordering(154) 00:11:34.679 fused_ordering(155) 00:11:34.679 fused_ordering(156) 00:11:34.679 fused_ordering(157) 00:11:34.679 fused_ordering(158) 00:11:34.679 fused_ordering(159) 00:11:34.679 fused_ordering(160) 00:11:34.679 fused_ordering(161) 00:11:34.679 fused_ordering(162) 00:11:34.679 fused_ordering(163) 00:11:34.679 fused_ordering(164) 00:11:34.679 fused_ordering(165) 00:11:34.679 fused_ordering(166) 00:11:34.679 fused_ordering(167) 00:11:34.679 fused_ordering(168) 00:11:34.679 fused_ordering(169) 00:11:34.679 fused_ordering(170) 00:11:34.679 fused_ordering(171) 00:11:34.679 
fused_ordering(172) 00:11:34.679 fused_ordering(173) 00:11:34.679 fused_ordering(174) 00:11:34.679 fused_ordering(175) 00:11:34.679 fused_ordering(176) 00:11:34.679 fused_ordering(177) 00:11:34.679 fused_ordering(178) 00:11:34.679 fused_ordering(179) 00:11:34.679 fused_ordering(180) 00:11:34.679 fused_ordering(181) 00:11:34.679 fused_ordering(182) 00:11:34.679 fused_ordering(183) 00:11:34.679 fused_ordering(184) 00:11:34.679 fused_ordering(185) 00:11:34.679 fused_ordering(186) 00:11:34.679 fused_ordering(187) 00:11:34.679 fused_ordering(188) 00:11:34.679 fused_ordering(189) 00:11:34.679 fused_ordering(190) 00:11:34.679 fused_ordering(191) 00:11:34.679 fused_ordering(192) 00:11:34.679 fused_ordering(193) 00:11:34.679 fused_ordering(194) 00:11:34.679 fused_ordering(195) 00:11:34.679 fused_ordering(196) 00:11:34.679 fused_ordering(197) 00:11:34.679 fused_ordering(198) 00:11:34.679 fused_ordering(199) 00:11:34.679 fused_ordering(200) 00:11:34.679 fused_ordering(201) 00:11:34.679 fused_ordering(202) 00:11:34.679 fused_ordering(203) 00:11:34.679 fused_ordering(204) 00:11:34.679 fused_ordering(205) 00:11:34.938 fused_ordering(206) 00:11:34.938 fused_ordering(207) 00:11:34.938 fused_ordering(208) 00:11:34.938 fused_ordering(209) 00:11:34.938 fused_ordering(210) 00:11:34.938 fused_ordering(211) 00:11:34.938 fused_ordering(212) 00:11:34.938 fused_ordering(213) 00:11:34.938 fused_ordering(214) 00:11:34.938 fused_ordering(215) 00:11:34.938 fused_ordering(216) 00:11:34.938 fused_ordering(217) 00:11:34.938 fused_ordering(218) 00:11:34.938 fused_ordering(219) 00:11:34.938 fused_ordering(220) 00:11:34.938 fused_ordering(221) 00:11:34.938 fused_ordering(222) 00:11:34.938 fused_ordering(223) 00:11:34.938 fused_ordering(224) 00:11:34.938 fused_ordering(225) 00:11:34.938 fused_ordering(226) 00:11:34.938 fused_ordering(227) 00:11:34.938 fused_ordering(228) 00:11:34.938 fused_ordering(229) 00:11:34.938 fused_ordering(230) 00:11:34.938 fused_ordering(231) 00:11:34.938 fused_ordering(232) 
00:11:34.938 fused_ordering(233) 00:11:34.938 fused_ordering(234) 00:11:34.938 fused_ordering(235) 00:11:34.938 fused_ordering(236) 00:11:34.938 fused_ordering(237) 00:11:34.938 fused_ordering(238) 00:11:34.938 fused_ordering(239) 00:11:34.938 fused_ordering(240) 00:11:34.938 fused_ordering(241) 00:11:34.938 fused_ordering(242) 00:11:34.938 fused_ordering(243) 00:11:34.938 fused_ordering(244) 00:11:34.938 fused_ordering(245) 00:11:34.938 fused_ordering(246) 00:11:34.938 fused_ordering(247) 00:11:34.938 fused_ordering(248) 00:11:34.938 fused_ordering(249) 00:11:34.938 fused_ordering(250) 00:11:34.938 fused_ordering(251) 00:11:34.938 fused_ordering(252) 00:11:34.938 fused_ordering(253) 00:11:34.938 fused_ordering(254) 00:11:34.938 fused_ordering(255) 00:11:34.938 fused_ordering(256) 00:11:34.938 fused_ordering(257) 00:11:34.938 fused_ordering(258) 00:11:34.938 fused_ordering(259) 00:11:34.938 fused_ordering(260) 00:11:34.938 fused_ordering(261) 00:11:34.938 fused_ordering(262) 00:11:34.938 fused_ordering(263) 00:11:34.938 fused_ordering(264) 00:11:34.938 fused_ordering(265) 00:11:34.938 fused_ordering(266) 00:11:34.938 fused_ordering(267) 00:11:34.938 fused_ordering(268) 00:11:34.938 fused_ordering(269) 00:11:34.938 fused_ordering(270) 00:11:34.938 fused_ordering(271) 00:11:34.938 fused_ordering(272) 00:11:34.938 fused_ordering(273) 00:11:34.938 fused_ordering(274) 00:11:34.938 fused_ordering(275) 00:11:34.938 fused_ordering(276) 00:11:34.938 fused_ordering(277) 00:11:34.938 fused_ordering(278) 00:11:34.938 fused_ordering(279) 00:11:34.938 fused_ordering(280) 00:11:34.938 fused_ordering(281) 00:11:34.938 fused_ordering(282) 00:11:34.938 fused_ordering(283) 00:11:34.938 fused_ordering(284) 00:11:34.938 fused_ordering(285) 00:11:34.938 fused_ordering(286) 00:11:34.938 fused_ordering(287) 00:11:34.938 fused_ordering(288) 00:11:34.938 fused_ordering(289) 00:11:34.938 fused_ordering(290) 00:11:34.938 fused_ordering(291) 00:11:34.938 fused_ordering(292) 00:11:34.938 
fused_ordering(293) 00:11:34.938 fused_ordering(294) 00:11:34.938 fused_ordering(295) 00:11:34.938 fused_ordering(296) 00:11:34.938 fused_ordering(297) 00:11:34.938 fused_ordering(298) 00:11:34.938 fused_ordering(299) 00:11:34.938 fused_ordering(300) 00:11:34.938 fused_ordering(301) 00:11:34.938 fused_ordering(302) 00:11:34.938 fused_ordering(303) 00:11:34.938 fused_ordering(304) 00:11:34.938 fused_ordering(305) 00:11:34.938 fused_ordering(306) 00:11:34.938 fused_ordering(307) 00:11:34.938 fused_ordering(308) 00:11:34.938 fused_ordering(309) 00:11:34.938 fused_ordering(310) 00:11:34.938 fused_ordering(311) 00:11:34.938 fused_ordering(312) 00:11:34.938 fused_ordering(313) 00:11:34.938 fused_ordering(314) 00:11:34.938 fused_ordering(315) 00:11:34.938 fused_ordering(316) 00:11:34.938 fused_ordering(317) 00:11:34.938 fused_ordering(318) 00:11:34.938 fused_ordering(319) 00:11:34.938 fused_ordering(320) 00:11:34.938 fused_ordering(321) 00:11:34.938 fused_ordering(322) 00:11:34.938 fused_ordering(323) 00:11:34.938 fused_ordering(324) 00:11:34.938 fused_ordering(325) 00:11:34.938 fused_ordering(326) 00:11:34.938 fused_ordering(327) 00:11:34.938 fused_ordering(328) 00:11:34.938 fused_ordering(329) 00:11:34.938 fused_ordering(330) 00:11:34.938 fused_ordering(331) 00:11:34.938 fused_ordering(332) 00:11:34.938 fused_ordering(333) 00:11:34.938 fused_ordering(334) 00:11:34.938 fused_ordering(335) 00:11:34.938 fused_ordering(336) 00:11:34.938 fused_ordering(337) 00:11:34.938 fused_ordering(338) 00:11:34.938 fused_ordering(339) 00:11:34.938 fused_ordering(340) 00:11:34.938 fused_ordering(341) 00:11:34.938 fused_ordering(342) 00:11:34.938 fused_ordering(343) 00:11:34.938 fused_ordering(344) 00:11:34.938 fused_ordering(345) 00:11:34.938 fused_ordering(346) 00:11:34.938 fused_ordering(347) 00:11:34.938 fused_ordering(348) 00:11:34.938 fused_ordering(349) 00:11:34.938 fused_ordering(350) 00:11:34.938 fused_ordering(351) 00:11:34.938 fused_ordering(352) 00:11:34.938 fused_ordering(353) 
00:11:34.938 fused_ordering(354) 00:11:34.938 fused_ordering(355) 00:11:34.938 fused_ordering(356) 00:11:34.938 fused_ordering(357) 00:11:34.938 fused_ordering(358) 00:11:34.938 fused_ordering(359) 00:11:34.938 fused_ordering(360) 00:11:34.938 fused_ordering(361) 00:11:34.938 fused_ordering(362) 00:11:34.938 fused_ordering(363) 00:11:34.938 fused_ordering(364) 00:11:34.938 fused_ordering(365) 00:11:34.938 fused_ordering(366) 00:11:34.938 fused_ordering(367) 00:11:34.938 fused_ordering(368) 00:11:34.938 fused_ordering(369) 00:11:34.938 fused_ordering(370) 00:11:34.938 fused_ordering(371) 00:11:34.938 fused_ordering(372) 00:11:34.938 fused_ordering(373) 00:11:34.938 fused_ordering(374) 00:11:34.938 fused_ordering(375) 00:11:34.938 fused_ordering(376) 00:11:34.938 fused_ordering(377) 00:11:34.938 fused_ordering(378) 00:11:34.938 fused_ordering(379) 00:11:34.938 fused_ordering(380) 00:11:34.938 fused_ordering(381) 00:11:34.938 fused_ordering(382) 00:11:34.938 fused_ordering(383) 00:11:34.938 fused_ordering(384) 00:11:34.938 fused_ordering(385) 00:11:34.938 fused_ordering(386) 00:11:34.938 fused_ordering(387) 00:11:34.938 fused_ordering(388) 00:11:34.938 fused_ordering(389) 00:11:34.938 fused_ordering(390) 00:11:34.938 fused_ordering(391) 00:11:34.938 fused_ordering(392) 00:11:34.938 fused_ordering(393) 00:11:34.938 fused_ordering(394) 00:11:34.938 fused_ordering(395) 00:11:34.938 fused_ordering(396) 00:11:34.938 fused_ordering(397) 00:11:34.938 fused_ordering(398) 00:11:34.938 fused_ordering(399) 00:11:34.938 fused_ordering(400) 00:11:34.938 fused_ordering(401) 00:11:34.938 fused_ordering(402) 00:11:34.938 fused_ordering(403) 00:11:34.938 fused_ordering(404) 00:11:34.938 fused_ordering(405) 00:11:34.938 fused_ordering(406) 00:11:34.938 fused_ordering(407) 00:11:34.938 fused_ordering(408) 00:11:34.938 fused_ordering(409) 00:11:34.938 fused_ordering(410) 00:11:35.197 fused_ordering(411) 00:11:35.197 fused_ordering(412) 00:11:35.197 fused_ordering(413) 00:11:35.197 
fused_ordering(414) 00:11:35.197 fused_ordering(415) 00:11:35.197 fused_ordering(416) 00:11:35.197 fused_ordering(417) 00:11:35.197 fused_ordering(418) 00:11:35.197 fused_ordering(419) 00:11:35.197 fused_ordering(420) 00:11:35.197 fused_ordering(421) 00:11:35.197 fused_ordering(422) 00:11:35.197 fused_ordering(423) 00:11:35.197 fused_ordering(424) 00:11:35.197 fused_ordering(425) 00:11:35.197 fused_ordering(426) 00:11:35.197 fused_ordering(427) 00:11:35.197 fused_ordering(428) 00:11:35.197 fused_ordering(429) 00:11:35.197 fused_ordering(430) 00:11:35.197 fused_ordering(431) 00:11:35.197 fused_ordering(432) 00:11:35.197 fused_ordering(433) 00:11:35.197 fused_ordering(434) 00:11:35.197 fused_ordering(435) 00:11:35.197 fused_ordering(436) 00:11:35.197 fused_ordering(437) 00:11:35.197 fused_ordering(438) 00:11:35.197 fused_ordering(439) 00:11:35.197 fused_ordering(440) 00:11:35.197 fused_ordering(441) 00:11:35.197 fused_ordering(442) 00:11:35.197 fused_ordering(443) 00:11:35.197 fused_ordering(444) 00:11:35.197 fused_ordering(445) 00:11:35.197 fused_ordering(446) 00:11:35.197 fused_ordering(447) 00:11:35.197 fused_ordering(448) 00:11:35.197 fused_ordering(449) 00:11:35.197 fused_ordering(450) 00:11:35.197 fused_ordering(451) 00:11:35.197 fused_ordering(452) 00:11:35.197 fused_ordering(453) 00:11:35.197 fused_ordering(454) 00:11:35.197 fused_ordering(455) 00:11:35.197 fused_ordering(456) 00:11:35.197 fused_ordering(457) 00:11:35.197 fused_ordering(458) 00:11:35.197 fused_ordering(459) 00:11:35.197 fused_ordering(460) 00:11:35.197 fused_ordering(461) 00:11:35.197 fused_ordering(462) 00:11:35.197 fused_ordering(463) 00:11:35.197 fused_ordering(464) 00:11:35.197 fused_ordering(465) 00:11:35.197 fused_ordering(466) 00:11:35.197 fused_ordering(467) 00:11:35.197 fused_ordering(468) 00:11:35.197 fused_ordering(469) 00:11:35.197 fused_ordering(470) 00:11:35.197 fused_ordering(471) 00:11:35.197 fused_ordering(472) 00:11:35.197 fused_ordering(473) 00:11:35.197 fused_ordering(474) 
00:11:35.197 fused_ordering(475) 00:11:35.197 fused_ordering(476) 00:11:35.197 fused_ordering(477) 00:11:35.197 fused_ordering(478) 00:11:35.197 fused_ordering(479) 00:11:35.197 fused_ordering(480) 00:11:35.197 fused_ordering(481) 00:11:35.197 fused_ordering(482) 00:11:35.197 fused_ordering(483) 00:11:35.197 fused_ordering(484) 00:11:35.197 fused_ordering(485) 00:11:35.197 fused_ordering(486) 00:11:35.197 fused_ordering(487) 00:11:35.197 fused_ordering(488) 00:11:35.197 fused_ordering(489) 00:11:35.197 fused_ordering(490) 00:11:35.197 fused_ordering(491) 00:11:35.197 fused_ordering(492) 00:11:35.197 fused_ordering(493) 00:11:35.197 fused_ordering(494) 00:11:35.197 fused_ordering(495) 00:11:35.197 fused_ordering(496) 00:11:35.197 fused_ordering(497) 00:11:35.197 fused_ordering(498) 00:11:35.197 fused_ordering(499) 00:11:35.197 fused_ordering(500) 00:11:35.197 fused_ordering(501) 00:11:35.197 fused_ordering(502) 00:11:35.197 fused_ordering(503) 00:11:35.197 fused_ordering(504) 00:11:35.197 fused_ordering(505) 00:11:35.197 fused_ordering(506) 00:11:35.197 fused_ordering(507) 00:11:35.197 fused_ordering(508) 00:11:35.198 fused_ordering(509) 00:11:35.198 fused_ordering(510) 00:11:35.198 fused_ordering(511) 00:11:35.198 fused_ordering(512) 00:11:35.198 fused_ordering(513) 00:11:35.198 fused_ordering(514) 00:11:35.198 fused_ordering(515) 00:11:35.198 fused_ordering(516) 00:11:35.198 fused_ordering(517) 00:11:35.198 fused_ordering(518) 00:11:35.198 fused_ordering(519) 00:11:35.198 fused_ordering(520) 00:11:35.198 fused_ordering(521) 00:11:35.198 fused_ordering(522) 00:11:35.198 fused_ordering(523) 00:11:35.198 fused_ordering(524) 00:11:35.198 fused_ordering(525) 00:11:35.198 fused_ordering(526) 00:11:35.198 fused_ordering(527) 00:11:35.198 fused_ordering(528) 00:11:35.198 fused_ordering(529) 00:11:35.198 fused_ordering(530) 00:11:35.198 fused_ordering(531) 00:11:35.198 fused_ordering(532) 00:11:35.198 fused_ordering(533) 00:11:35.198 fused_ordering(534) 00:11:35.198 
fused_ordering(535) 00:11:35.198 fused_ordering(536) 00:11:35.198 fused_ordering(537) 00:11:35.198 fused_ordering(538) 00:11:35.198 fused_ordering(539) 00:11:35.198 fused_ordering(540) 00:11:35.198 fused_ordering(541) 00:11:35.198 fused_ordering(542) 00:11:35.198 fused_ordering(543) 00:11:35.198 fused_ordering(544) 00:11:35.198 fused_ordering(545) 00:11:35.198 fused_ordering(546) 00:11:35.198 fused_ordering(547) 00:11:35.198 fused_ordering(548) 00:11:35.198 fused_ordering(549) 00:11:35.198 fused_ordering(550) 00:11:35.198 fused_ordering(551) 00:11:35.198 fused_ordering(552) 00:11:35.198 fused_ordering(553) 00:11:35.198 fused_ordering(554) 00:11:35.198 fused_ordering(555) 00:11:35.198 fused_ordering(556) 00:11:35.198 fused_ordering(557) 00:11:35.198 fused_ordering(558) 00:11:35.198 fused_ordering(559) 00:11:35.198 fused_ordering(560) 00:11:35.198 fused_ordering(561) 00:11:35.198 fused_ordering(562) 00:11:35.198 fused_ordering(563) 00:11:35.198 fused_ordering(564) 00:11:35.198 fused_ordering(565) 00:11:35.198 fused_ordering(566) 00:11:35.198 fused_ordering(567) 00:11:35.198 fused_ordering(568) 00:11:35.198 fused_ordering(569) 00:11:35.198 fused_ordering(570) 00:11:35.198 fused_ordering(571) 00:11:35.198 fused_ordering(572) 00:11:35.198 fused_ordering(573) 00:11:35.198 fused_ordering(574) 00:11:35.198 fused_ordering(575) 00:11:35.198 fused_ordering(576) 00:11:35.198 fused_ordering(577) 00:11:35.198 fused_ordering(578) 00:11:35.198 fused_ordering(579) 00:11:35.198 fused_ordering(580) 00:11:35.198 fused_ordering(581) 00:11:35.198 fused_ordering(582) 00:11:35.198 fused_ordering(583) 00:11:35.198 fused_ordering(584) 00:11:35.198 fused_ordering(585) 00:11:35.198 fused_ordering(586) 00:11:35.198 fused_ordering(587) 00:11:35.198 fused_ordering(588) 00:11:35.198 fused_ordering(589) 00:11:35.198 fused_ordering(590) 00:11:35.198 fused_ordering(591) 00:11:35.198 fused_ordering(592) 00:11:35.198 fused_ordering(593) 00:11:35.198 fused_ordering(594) 00:11:35.198 fused_ordering(595) 
00:11:35.198 fused_ordering(596) 00:11:35.198 fused_ordering(597) 00:11:35.198 fused_ordering(598) 00:11:35.198 fused_ordering(599) 00:11:35.198 fused_ordering(600) 00:11:35.198 fused_ordering(601) 00:11:35.198 fused_ordering(602) 00:11:35.198 fused_ordering(603) 00:11:35.198 fused_ordering(604) 00:11:35.198 fused_ordering(605) 00:11:35.198 fused_ordering(606) 00:11:35.198 fused_ordering(607) 00:11:35.198 fused_ordering(608) 00:11:35.198 fused_ordering(609) 00:11:35.198 fused_ordering(610) 00:11:35.198 fused_ordering(611) 00:11:35.198 fused_ordering(612) 00:11:35.198 fused_ordering(613) 00:11:35.198 fused_ordering(614) 00:11:35.198 fused_ordering(615) 00:11:35.770 fused_ordering(616) 00:11:35.770 fused_ordering(617) 00:11:35.770 fused_ordering(618) 00:11:35.770 fused_ordering(619) 00:11:35.770 fused_ordering(620) 00:11:35.770 fused_ordering(621) 00:11:35.770 fused_ordering(622) 00:11:35.770 fused_ordering(623) 00:11:35.770 fused_ordering(624) 00:11:35.770 fused_ordering(625) 00:11:35.770 fused_ordering(626) 00:11:35.770 fused_ordering(627) 00:11:35.770 fused_ordering(628) 00:11:35.770 fused_ordering(629) 00:11:35.770 fused_ordering(630) 00:11:35.770 fused_ordering(631) 00:11:35.770 fused_ordering(632) 00:11:35.770 fused_ordering(633) 00:11:35.770 fused_ordering(634) 00:11:35.770 fused_ordering(635) 00:11:35.770 fused_ordering(636) 00:11:35.770 fused_ordering(637) 00:11:35.770 fused_ordering(638) 00:11:35.770 fused_ordering(639) 00:11:35.770 fused_ordering(640) 00:11:35.770 fused_ordering(641) 00:11:35.770 fused_ordering(642) 00:11:35.770 fused_ordering(643) 00:11:35.770 fused_ordering(644) 00:11:35.770 fused_ordering(645) 00:11:35.770 fused_ordering(646) 00:11:35.770 fused_ordering(647) 00:11:35.770 fused_ordering(648) 00:11:35.770 fused_ordering(649) 00:11:35.770 fused_ordering(650) 00:11:35.770 fused_ordering(651) 00:11:35.770 fused_ordering(652) 00:11:35.770 fused_ordering(653) 00:11:35.770 fused_ordering(654) 00:11:35.770 fused_ordering(655) 00:11:35.770 
fused_ordering(656) 00:11:35.770 … fused_ordering(1018) [repetitive per-iteration counter output, iterations 656–1018, logged between 00:11:35.770 and 00:11:36.339]
00:11:36.339 fused_ordering(1019) 00:11:36.339 fused_ordering(1020) 00:11:36.339 fused_ordering(1021) 00:11:36.339 fused_ordering(1022) 00:11:36.339 fused_ordering(1023) 00:11:36.340 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:36.340 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:36.340 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:36.340 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:36.340 rmmod nvme_tcp 00:11:36.340 rmmod nvme_fabrics 00:11:36.340 rmmod nvme_keyring 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 75615 ']' 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 75615 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75615 ']' 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75615 00:11:36.340 09:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75615 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75615' 00:11:36.340 killing process with pid 75615 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75615 00:11:36.340 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75615 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:11:36.598 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:36.599 09:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:11:36.599 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # continue 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # continue 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:36.857 09:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:11:36.857 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:11:36.857 00:11:36.857 real 0m3.898s 00:11:36.857 user 0m4.386s 00:11:36.858 sys 0m1.510s 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:36.858 ************************************ 00:11:36.858 END TEST nvmf_fused_ordering 00:11:36.858 ************************************ 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:36.858 ************************************ 00:11:36.858 START TEST nvmf_ns_masking 00:11:36.858 ************************************ 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:36.858 * Looking for test storage... 
00:11:36.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:36.858 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:37.117 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:37.118 09:05:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:37.118 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:37.118 --rc genhtml_branch_coverage=1 00:11:37.118 --rc genhtml_function_coverage=1 00:11:37.118 --rc genhtml_legend=1 00:11:37.118 --rc geninfo_all_blocks=1 00:11:37.118 --rc geninfo_unexecuted_blocks=1 00:11:37.118 00:11:37.118 ' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.118 --rc genhtml_branch_coverage=1 00:11:37.118 --rc genhtml_function_coverage=1 00:11:37.118 --rc genhtml_legend=1 00:11:37.118 --rc geninfo_all_blocks=1 00:11:37.118 --rc geninfo_unexecuted_blocks=1 00:11:37.118 00:11:37.118 ' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.118 --rc genhtml_branch_coverage=1 00:11:37.118 --rc genhtml_function_coverage=1 00:11:37.118 --rc genhtml_legend=1 00:11:37.118 --rc geninfo_all_blocks=1 00:11:37.118 --rc geninfo_unexecuted_blocks=1 00:11:37.118 00:11:37.118 ' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:37.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:37.118 --rc genhtml_branch_coverage=1 00:11:37.118 --rc genhtml_function_coverage=1 00:11:37.118 --rc genhtml_legend=1 00:11:37.118 --rc geninfo_all_blocks=1 00:11:37.118 --rc geninfo_unexecuted_blocks=1 00:11:37.118 00:11:37.118 ' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- 
# [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@50 -- # : 0 
00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:37.118 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:37.118 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e7046721-c0e5-4e74-a795-a1a41eb66c38 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=5909e2f1-2b22-49be-97cb-ccc1737ae695 
00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d7f8c9a3-88b6-405e-a781-1779a6bf2155 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@271 -- # [[ virt == phy ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@280 -- # nvmf_veth_init 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@223 -- # create_target_ns 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # create_main_bridge 
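The repeated `set_up` calls in the trace (setup.sh@204-207) show how setup.sh runs the same `ip` command either on the host or inside the target namespace: the caller passes the *name* of a prefix array (`NVMF_TARGET_NS_CMD`, containing `ip netns exec nvmf_ns_spdk`), the helper binds it with a bash nameref, and `eval` splices the prefix onto the command line. A sketch of that dispatch pattern, with `echo` standing in for the privileged `ip` invocation so it runs without root:

```shell
#!/usr/bin/env bash
# Command prefix that redirects execution into the target namespace,
# mirroring NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE").
TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

# set_up <dev> [name-of-prefix-array] -- same shape as the helper in the trace.
set_up() {
    local dev=$1 in_ns=${2:-}
    # Bind a nameref only when a prefix-array name was supplied (setup.sh@205).
    [[ -n $in_ns ]] && local -n ns=$in_ns
    # eval lets the expanded prefix become part of the command line (setup.sh@207);
    # here echo replaces the real privileged command for demonstration.
    eval "echo ${ns[*]} ip link set $dev up"
}

set_up lo TARGET_NS_CMD   # -> ip netns exec nvmf_ns_spdk ip link set lo up
set_up nvmf_br            # -> ip link set nvmf_br up
```

Passing the array's name rather than its contents keeps the helper's signature fixed while letting each call site choose host or namespace execution.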
00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@105 -- # delete_main_bridge 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
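The `setup_interfaces 2 veth` loop entered above allocates addresses from an integer pool (`ip_pool=0x0a000001`), handing each initiator/target pair two consecutive values and rendering them as dotted quads via `val_to_ip` (setup.sh@11-13, the `printf '%u.%u.%u.%u\n'` calls in the trace). A sketch of that arithmetic, reproducing the 10.0.0.1/10.0.0.2 and 10.0.0.3/10.0.0.4 assignments the log goes on to make:

```shell
#!/usr/bin/env bash
# val_to_ip: render a 32-bit integer as a dotted quad by extracting each byte,
# as setup.sh's helper does with printf '%u.%u.%u.%u\n'.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

# Walk the same pool: 0x0a000001 is 10.0.0.1, each pair consumes two
# consecutive addresses (initiator, then target), advancing the pool by 2.
ip_pool=$(( 0x0a000001 ))
for id in 0 1; do
    echo "pair $id: initiator=$(val_to_ip "$ip_pool") target=$(val_to_ip $(( ip_pool + 1 )))"
    (( ip_pool += 2 ))
done
```

The `(_dev + no) * 2 <= 255` bound in the trace follows from this layout: every pair costs two host addresses, and all of them must fit in the single 10.0.0.0/24 used for the bridge.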
00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up initiator0 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:37.119 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up target0 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0 up 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@152 -- # set_up target0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns target0 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 
-- # ip addr add 10.0.0.1/24 dev initiator0 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:11:37.120 10.0.0.1 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- 
# ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:37.120 10.0.0.2 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up initiator0 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # 
ip link set initiator0_br master nvmf_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:11:37.120 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up target0_br 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:37.120 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=initiator1 
peer=initiator1_br 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up initiator1 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:37.120 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:37.121 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@151 -- # set_up target1 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1 up 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # set_up target1_br 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns target1 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local 
val=167772163 00:11:37.381 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:11:37.382 10.0.0.3 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772164 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:11:37.382 10.0.0.4 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up initiator1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip 
link set target1 up 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@129 -- # set_up target1_br 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # 
eval ' ip link set target1_br up' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 
-- # [[ -n '' ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # 
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:37.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:11:37.382 00:11:37.382 --- 10.0.0.1 ping statistics --- 00:11:37.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.382 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:11:37.382 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:37.383 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:37.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:11:37.383 00:11:37.383 --- 10.0.0.2 ping statistics --- 00:11:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.383 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:37.383 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:11:37.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:37.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:11:37.383 00:11:37.383 --- 10.0.0.3 ping statistics --- 00:11:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.383 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:37.383 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:11:37.383 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:37.383 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:11:37.383 00:11:37.383 --- 10.0.0.4 ping statistics --- 00:11:37.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.383 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # return 0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 
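The address-discovery loop traced above repeats one primitive over each initiator/target pair: read the IP that the setup scripts stashed in the interface's `ifalias`, entering the `nvmf_ns_spdk` network namespace for target-side devices, then ping it. A minimal stand-alone sketch of that primitive, exercised against a throwaway fake sysfs tree so no real interfaces are needed (the fake tree is illustrative, not from the log):

```shell
#!/usr/bin/env bash
# Condensed form of the get_ip_address helper seen in the trace: a device's
# IP is simply the contents of <sysfs>/class/net/<dev>/ifalias, read inside
# a network namespace when one is given.
get_ifalias() {
    local root=$1 dev=$2 netns=$3
    if [[ -n $netns ]]; then
        ip netns exec "$netns" cat "$root/class/net/$dev/ifalias"
    else
        cat "$root/class/net/$dev/ifalias"
    fi
}

# Exercise it against a fake sysfs layout under a temp dir.
root=$(mktemp -d)
mkdir -p "$root/class/net/initiator0"
echo 10.0.0.1 > "$root/class/net/initiator0/ifalias"
get_ifalias "$root" initiator0   # prints 10.0.0.1, as in the trace
```

In the real run `root` is `/sys` and target devices are read through `ip netns exec nvmf_ns_spdk`, which is why the target-side `cat` lines in the log carry the netns prefix.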
00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:37.383 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:37.384 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo initiator1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=initiator1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target0 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address 
target1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo target1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=target1 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:11:37.384 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=75898 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 75898 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75898 ']' 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.643 09:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:37.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:37.643 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:37.643 [2024-11-20 09:05:16.394956] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:11:37.643 [2024-11-20 09:05:16.395119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.643 [2024-11-20 09:05:16.547421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.901 [2024-11-20 09:05:16.618035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.901 [2024-11-20 09:05:16.618116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.901 [2024-11-20 09:05:16.618130] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.901 [2024-11-20 09:05:16.618141] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.901 [2024-11-20 09:05:16.618150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:37.901 [2024-11-20 09:05:16.618617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.901 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.467 [2024-11-20 09:05:17.121653] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.467 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:38.467 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:38.467 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:38.725 Malloc1 00:11:38.725 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:38.984 Malloc2 00:11:38.984 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
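Buried in the xtrace noise, the target configuration the test performs here and on the following lines is short. A condensed sketch of the same RPC sequence, with the repo path to `rpc.py` abbreviated (every command, name, and address below is taken from the log):

```shell
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

These calls assume a running `nvmf_tgt` listening on the default RPC socket; in the trace the target runs inside the `nvmf_ns_spdk` namespace, so the host then connects with `nvme connect -t tcp -a 10.0.0.2 -s 4420`.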
00:11:39.243 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:39.501 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.758 [2024-11-20 09:05:18.608235] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.758 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:39.758 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7f8c9a3-88b6-405e-a781-1779a6bf2155 -a 10.0.0.2 -s 4420 -i 4 00:11:40.016 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.016 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.016 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.016 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.016 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.919 [ 0]:0x1 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.919 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.178 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc0c026139a04f2694b2898b72d55ec4 00:11:42.178 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc0c026139a04f2694b2898b72d55ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.178 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
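The `ns_is_visible` checks above boil down to one rule: a namespace the host is allowed to see reports its real NGUID through `nvme id-ns`, while a masked namespace comes back as 32 zeros. A self-contained restatement of that comparison (the sample NGUID is the one the trace reports for NSID 1):

```shell
#!/usr/bin/env bash
# A masked namespace identifies with an all-zero NGUID; visibility is just
# an inequality test against 32 zeros, mirroring ns_masking.sh's [[ ... ]].
zero_nguid=00000000000000000000000000000000

ns_is_visible() {
    local nguid=$1
    [[ $nguid != "$zero_nguid" ]]
}

ns_is_visible dc0c026139a04f2694b2898b72d55ec4 && echo "visible"
ns_is_visible "$zero_nguid" || echo "masked"
```

In the log the `nguid` value fed to this comparison comes from `nvme id-ns /dev/nvme0 -n <nsid> -o json | jq -r .nguid`.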
00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.437 [ 0]:0x1 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc0c026139a04f2694b2898b72d55ec4 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc0c026139a04f2694b2898b72d55ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.437 [ 1]:0x2 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # 
disconnect 00:11:42.437 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.695 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.954 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:43.212 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:43.212 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7f8c9a3-88b6-405e-a781-1779a6bf2155 -a 10.0.0.2 -s 4420 -i 4 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:43.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:45.746 
09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:45.746 [ 0]:0x2 00:11:45.746 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:45.747 [ 0]:0x1 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:45.747 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc0c026139a04f2694b2898b72d55ec4 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc0c026139a04f2694b2898b72d55ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.006 [ 1]:0x2 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.006 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.265 09:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.265 [ 0]:0x2 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.265 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:46.265 09:05:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.523 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d7f8c9a3-88b6-405e-a781-1779a6bf2155 -a 10.0.0.2 -s 4420 -i 4 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:46.782 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # 
nvme_devices=2 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:49.318 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.319 [ 0]:0x1 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=dc0c026139a04f2694b2898b72d55ec4 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ dc0c026139a04f2694b2898b72d55ec4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:49.319 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.319 [ 1]:0x2 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.319 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.319 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.578 [ 0]:0x2 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:49.578 09:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:49.578 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:49.837 [2024-11-20 09:05:28.619245] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:49.837 2024/11/20 09:05:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:11:49.837 request: 00:11:49.837 { 00:11:49.837 "method": "nvmf_ns_remove_host", 00:11:49.837 "params": { 00:11:49.837 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.837 "nsid": 2, 00:11:49.837 "host": "nqn.2016-06.io.spdk:host1" 00:11:49.837 } 00:11:49.837 } 00:11:49.837 Got JSON-RPC error response 00:11:49.837 GoRPCClient: error on JSON-RPC call 00:11:49.837 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:49.837 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:11:49.838 [ 0]:0x2 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.838 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0cbad2286cb24571bb8ba30087d5a9ef 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0cbad2286cb24571bb8ba30087d5a9ef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.097 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76267 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76267 /var/tmp/host.sock 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 76267 ']' 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.097 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/host.sock... 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.097 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.097 [2024-11-20 09:05:28.908616] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:11:50.097 [2024-11-20 09:05:28.909328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76267 ] 00:11:50.357 [2024-11-20 09:05:29.062787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.357 [2024-11-20 09:05:29.130853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.293 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.293 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:51.293 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.293 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.861 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e7046721-c0e5-4e74-a795-a1a41eb66c38 00:11:51.861 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:11:51.861 09:05:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E7046721C0E54E74A795A1A41EB66C38 -i 00:11:52.120 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5909e2f1-2b22-49be-97cb-ccc1737ae695 00:11:52.120 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:11:52.120 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5909E2F12B2249BE97CBCCC1737AE695 -i 00:11:52.380 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.639 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:52.897 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:52.897 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:53.156 nvme0n1 00:11:53.156 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:53.156 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:53.724 nvme1n2 00:11:53.724 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:53.724 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:53.724 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:53.724 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:53.724 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:53.982 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:53.982 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:53.982 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:53.982 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:54.241 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e7046721-c0e5-4e74-a795-a1a41eb66c38 == \e\7\0\4\6\7\2\1\-\c\0\e\5\-\4\e\7\4\-\a\7\9\5\-\a\1\a\4\1\e\b\6\6\c\3\8 ]] 00:11:54.241 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:54.241 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 
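The `uuid2nguid` helper invoked above (nvmf/common.sh@544) feeds a bdev UUID through `tr -d -` so it can be passed to `nvmf_subsystem_add_ns -g` as a 32-hex-digit NGUID (e.g. `E7046721C0E54E74A795A1A41EB66C38`). A minimal Python re-implementation of that conversion; the uppercasing is an assumption inferred from the values in this log, since only the dash-stripping step is visible in the trace:

```python
def uuid2nguid(uuid_str: str) -> str:
    """Convert a canonical UUID into the NGUID form expected by
    `rpc.py nvmf_subsystem_add_ns -g`: dashes removed, hex digits
    uppercased. Mirrors the `tr -d -` pipeline in nvmf/common.sh;
    the uppercase step is an assumption based on the log output."""
    return uuid_str.replace("-", "").upper()

print(uuid2nguid("e7046721-c0e5-4e74-a795-a1a41eb66c38"))
```

Both namespace UUIDs seen in this run round-trip through this conversion to exactly the `-g` arguments the test passes to `nvmf_subsystem_add_ns`.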
00:11:54.241 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:54.501 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5909e2f1-2b22-49be-97cb-ccc1737ae695 == \5\9\0\9\e\2\f\1\-\2\b\2\2\-\4\9\b\e\-\9\7\c\b\-\c\c\c\1\7\3\7\a\e\6\9\5 ]] 00:11:54.501 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.759 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid e7046721-c0e5-4e74-a795-a1a41eb66c38 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7046721C0E54E74A795A1A41EB66C38 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7046721C0E54E74A795A1A41EB66C38 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.017 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:55.018 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g E7046721C0E54E74A795A1A41EB66C38 00:11:55.313 [2024-11-20 09:05:34.211529] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:55.313 [2024-11-20 09:05:34.211577] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:55.313 [2024-11-20 09:05:34.211599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:55.313 2024/11/20 09:05:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:E7046721C0E54E74A795A1A41EB66C38 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:11:55.313 request: 00:11:55.313 { 00:11:55.313 "method": "nvmf_subsystem_add_ns", 00:11:55.313 "params": { 00:11:55.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:55.313 "namespace": { 00:11:55.313 "bdev_name": "invalid", 
00:11:55.313 "nsid": 1, 00:11:55.313 "nguid": "E7046721C0E54E74A795A1A41EB66C38", 00:11:55.313 "no_auto_visible": false 00:11:55.313 } 00:11:55.313 } 00:11:55.313 } 00:11:55.313 Got JSON-RPC error response 00:11:55.313 GoRPCClient: error on JSON-RPC call 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid e7046721-c0e5-4e74-a795-a1a41eb66c38 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:11:55.572 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E7046721C0E54E74A795A1A41EB66C38 -i 00:11:55.831 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:57.734 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:57.735 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:57.735 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 76267 00:11:57.994 09:05:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 76267 ']' 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 76267 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76267 00:11:57.994 killing process with pid 76267 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76267' 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 76267 00:11:57.994 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 76267 00:11:58.562 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 
00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:58.821 rmmod nvme_tcp 00:11:58.821 rmmod nvme_fabrics 00:11:58.821 rmmod nvme_keyring 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 75898 ']' 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 75898 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75898 ']' 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75898 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.821 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75898 00:11:59.079 killing process with pid 75898 00:11:59.080 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.080 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.080 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75898' 
00:11:59.080 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75898 00:11:59.080 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75898 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:59.348 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:59.348 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:11:59.348 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # continue 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # continue 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:59.348 00:11:59.348 real 0m22.569s 00:11:59.348 user 0m38.839s 00:11:59.348 sys 0m3.497s 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:59.348 ************************************ 00:11:59.348 END TEST nvmf_ns_masking 00:11:59.348 ************************************ 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh 
--transport=tcp 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.348 ************************************ 00:11:59.348 START TEST nvmf_auth_target 00:11:59.348 ************************************ 00:11:59.348 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:59.620 * Looking for test storage... 00:11:59.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # 
read -ra ver2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.620 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:59.620 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:59.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.620 --rc genhtml_branch_coverage=1 00:11:59.620 --rc genhtml_function_coverage=1 00:11:59.621 --rc genhtml_legend=1 00:11:59.621 --rc geninfo_all_blocks=1 00:11:59.621 --rc geninfo_unexecuted_blocks=1 00:11:59.621 00:11:59.621 ' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.621 --rc genhtml_branch_coverage=1 00:11:59.621 --rc genhtml_function_coverage=1 00:11:59.621 --rc genhtml_legend=1 00:11:59.621 --rc geninfo_all_blocks=1 00:11:59.621 --rc geninfo_unexecuted_blocks=1 00:11:59.621 00:11:59.621 ' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.621 --rc genhtml_branch_coverage=1 00:11:59.621 --rc genhtml_function_coverage=1 00:11:59.621 --rc genhtml_legend=1 00:11:59.621 --rc geninfo_all_blocks=1 00:11:59.621 --rc geninfo_unexecuted_blocks=1 00:11:59.621 00:11:59.621 ' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:59.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.621 
--rc genhtml_branch_coverage=1 00:11:59.621 --rc genhtml_function_coverage=1 00:11:59.621 --rc genhtml_legend=1 00:11:59.621 --rc geninfo_all_blocks=1 00:11:59.621 --rc geninfo_unexecuted_blocks=1 00:11:59.621 00:11:59.621 ' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:59.621 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@50 -- # : 0 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:59.621 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 
-- # xtrace_disable_per_cmd _remove_target_ns 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:11:59.621 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@223 -- # create_target_ns 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:59.622 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:59.622 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 
-- # ip link add target0 type veth peer name target0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target0 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:59.622 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:11:59.883 10.0.0.1 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:59.883 10.0.0.2 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target0_br 
00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:11:59.883 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:59.883 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 
00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@151 -- # set_up target1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:11:59.884 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772163 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:11:59.884 10.0.0.3 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 
in_ns=NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772164 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:11:59.884 10.0.0.4 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:11:59.884 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:11:59.884 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # 
(( _dev < max + no )) 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:59.884 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:59.884 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:59.885 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:59.885 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:59.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:59.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:11:59.885 00:11:59.885 --- 10.0.0.1 ping statistics --- 00:11:59.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.885 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:59.885 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:59.885 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:00.145 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:00.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:12:00.145 00:12:00.145 --- 10.0.0.2 ping statistics --- 00:12:00.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.145 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:00.145 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:12:00.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:00.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:12:00.146 00:12:00.146 --- 10.0.0.3 ping statistics --- 00:12:00.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.146 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:00.146 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:12:00.146 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:00.146 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:12:00.146 00:12:00.146 --- 10.0.0.4 ping statistics --- 00:12:00.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.146 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # return 0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2=target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 
00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:00.146 
09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target0 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:00.146 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo target1 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=target1 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:12:00.147 09:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=76766 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 76766 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76766 ']' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.147 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76791 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 
-- # local digest len file key 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=78dd30c48e165bf5104245ee75d7d2c2bfff2d70a8dc9aa6 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.CFd 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 78dd30c48e165bf5104245ee75d7d2c2bfff2d70a8dc9aa6 0 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 78dd30c48e165bf5104245ee75d7d2c2bfff2d70a8dc9aa6 0 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=78dd30c48e165bf5104245ee75d7d2c2bfff2d70a8dc9aa6 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.CFd 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.CFd 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.CFd 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=630676fabd0c52e0127f6bdc709db5ba381fcb32e92f55815ea0ac55d2340215 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.bBQ 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 630676fabd0c52e0127f6bdc709db5ba381fcb32e92f55815ea0ac55d2340215 3 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 630676fabd0c52e0127f6bdc709db5ba381fcb32e92f55815ea0ac55d2340215 3 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # 
local prefix key digest 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=630676fabd0c52e0127f6bdc709db5ba381fcb32e92f55815ea0ac55d2340215 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.bBQ 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.bBQ 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bBQ 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=28c6b21030c8a7230ec90c6ba567adf1 00:12:00.726 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:12:00.726 09:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.nLW 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 28c6b21030c8a7230ec90c6ba567adf1 1 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 28c6b21030c8a7230ec90c6ba567adf1 1 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=28c6b21030c8a7230ec90c6ba567adf1 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:12:00.727 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.nLW 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.nLW 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.nLW 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:12:00.986 09:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=e8be57c1848eaf516449fb2ee69ca2f24ebda3ff14cecf82 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.4gA 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key e8be57c1848eaf516449fb2ee69ca2f24ebda3ff14cecf82 2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 e8be57c1848eaf516449fb2ee69ca2f24ebda3ff14cecf82 2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=e8be57c1848eaf516449fb2ee69ca2f24ebda3ff14cecf82 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.4gA 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.4gA 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.4gA 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:00.986 
09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=115ed04d3cefb24df94d9f6746754950d472c8877058f263 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.9rR 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 115ed04d3cefb24df94d9f6746754950d472c8877058f263 2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 115ed04d3cefb24df94d9f6746754950d472c8877058f263 2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=115ed04d3cefb24df94d9f6746754950d472c8877058f263 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 
00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.9rR 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.9rR 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.9rR 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=131f38aae4ad43085dbb473ebbc892d4 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:12:00.986 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.aTk 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 131f38aae4ad43085dbb473ebbc892d4 1 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 131f38aae4ad43085dbb473ebbc892d4 1 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 
00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=131f38aae4ad43085dbb473ebbc892d4 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.aTk 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.aTk 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.aTk 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=f11adfb1e21e98692a064700ce347face0e3beaf13917ff8ec01bd085b00eadf 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:12:00.987 09:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.AnQ 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key f11adfb1e21e98692a064700ce347face0e3beaf13917ff8ec01bd085b00eadf 3 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 f11adfb1e21e98692a064700ce347face0e3beaf13917ff8ec01bd085b00eadf 3 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=f11adfb1e21e98692a064700ce347face0e3beaf13917ff8ec01bd085b00eadf 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:12:00.987 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.AnQ 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.AnQ 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.AnQ 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76766 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76766 ']' 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- 
# local max_retries=100 00:12:01.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.246 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76791 /var/tmp/host.sock 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76791 ']' 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.506 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CFd 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.CFd 00:12:01.765 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.CFd 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bBQ 
]] 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bBQ 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bBQ 00:12:02.332 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bBQ 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nLW 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.nLW 00:12:02.332 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.nLW 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.4gA ]] 00:12:02.901 09:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4gA 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4gA 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4gA 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9rR 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.9rR 00:12:02.901 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.9rR 00:12:03.160 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.aTk ]] 00:12:03.160 09:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aTk 00:12:03.160 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.160 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.419 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.419 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aTk 00:12:03.419 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aTk 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.AnQ 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.AnQ 00:12:03.678 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.AnQ 00:12:03.936 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:03.937 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:03.937 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.937 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.937 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:03.937 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.199 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.199 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:04.199 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.199 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.199 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.458 00:12:04.458 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.458 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.458 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 
-- # qpairs='[ 00:12:05.026 { 00:12:05.026 "auth": { 00:12:05.026 "dhgroup": "null", 00:12:05.026 "digest": "sha256", 00:12:05.026 "state": "completed" 00:12:05.026 }, 00:12:05.026 "cntlid": 1, 00:12:05.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:05.026 "listen_address": { 00:12:05.026 "adrfam": "IPv4", 00:12:05.026 "traddr": "10.0.0.2", 00:12:05.026 "trsvcid": "4420", 00:12:05.026 "trtype": "TCP" 00:12:05.026 }, 00:12:05.026 "peer_address": { 00:12:05.026 "adrfam": "IPv4", 00:12:05.026 "traddr": "10.0.0.1", 00:12:05.026 "trsvcid": "40610", 00:12:05.026 "trtype": "TCP" 00:12:05.026 }, 00:12:05.026 "qid": 0, 00:12:05.026 "state": "enabled", 00:12:05.026 "thread": "nvmf_tgt_poll_group_000" 00:12:05.026 } 00:12:05.026 ]' 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.026 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.286 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:05.286 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups null 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.559 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.559 00:12:10.559 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:10.559 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.559 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.873 { 00:12:10.873 "auth": { 00:12:10.873 "dhgroup": "null", 00:12:10.873 "digest": "sha256", 00:12:10.873 "state": "completed" 00:12:10.873 }, 00:12:10.873 "cntlid": 3, 00:12:10.873 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:10.873 "listen_address": { 00:12:10.873 "adrfam": "IPv4", 00:12:10.873 "traddr": "10.0.0.2", 00:12:10.873 "trsvcid": "4420", 00:12:10.873 "trtype": "TCP" 00:12:10.873 }, 00:12:10.873 "peer_address": { 00:12:10.873 "adrfam": "IPv4", 00:12:10.873 "traddr": "10.0.0.1", 00:12:10.873 "trsvcid": "57388", 00:12:10.873 "trtype": "TCP" 00:12:10.873 }, 
00:12:10.873 "qid": 0, 00:12:10.873 "state": "enabled", 00:12:10.873 "thread": "nvmf_tgt_poll_group_000" 00:12:10.873 } 00:12:10.873 ]' 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:10.873 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.132 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.132 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.132 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.390 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:11.390 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:11.957 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:11.958 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.216 09:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.216 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.784 00:12:12.784 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:12.784 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.784 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.044 
09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.044 { 00:12:13.044 "auth": { 00:12:13.044 "dhgroup": "null", 00:12:13.044 "digest": "sha256", 00:12:13.044 "state": "completed" 00:12:13.044 }, 00:12:13.044 "cntlid": 5, 00:12:13.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:13.044 "listen_address": { 00:12:13.044 "adrfam": "IPv4", 00:12:13.044 "traddr": "10.0.0.2", 00:12:13.044 "trsvcid": "4420", 00:12:13.044 "trtype": "TCP" 00:12:13.044 }, 00:12:13.044 "peer_address": { 00:12:13.044 "adrfam": "IPv4", 00:12:13.044 "traddr": "10.0.0.1", 00:12:13.044 "trsvcid": "57402", 00:12:13.044 "trtype": "TCP" 00:12:13.044 }, 00:12:13.044 "qid": 0, 00:12:13.044 "state": "enabled", 00:12:13.044 "thread": "nvmf_tgt_poll_group_000" 00:12:13.044 } 00:12:13.044 ]' 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:13.044 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.303 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:13.303 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.303 09:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.303 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.303 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.561 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:13.561 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.496 09:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.496 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:15.064 00:12:15.064 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.064 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.064 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.323 { 00:12:15.323 "auth": { 00:12:15.323 "dhgroup": "null", 00:12:15.323 "digest": "sha256", 00:12:15.323 "state": "completed" 00:12:15.323 }, 00:12:15.323 "cntlid": 7, 00:12:15.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 
00:12:15.323 "listen_address": { 00:12:15.323 "adrfam": "IPv4", 00:12:15.323 "traddr": "10.0.0.2", 00:12:15.323 "trsvcid": "4420", 00:12:15.323 "trtype": "TCP" 00:12:15.323 }, 00:12:15.323 "peer_address": { 00:12:15.323 "adrfam": "IPv4", 00:12:15.323 "traddr": "10.0.0.1", 00:12:15.323 "trsvcid": "57434", 00:12:15.323 "trtype": "TCP" 00:12:15.323 }, 00:12:15.323 "qid": 0, 00:12:15.323 "state": "enabled", 00:12:15.323 "thread": "nvmf_tgt_poll_group_000" 00:12:15.323 } 00:12:15.323 ]' 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:15.323 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.582 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.582 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.582 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.841 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:15.841 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 
44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:16.409 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:16.977 09:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.977 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.235 00:12:17.235 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.235 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.235 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.495 { 00:12:17.495 "auth": { 00:12:17.495 "dhgroup": "ffdhe2048", 00:12:17.495 "digest": "sha256", 00:12:17.495 "state": "completed" 00:12:17.495 }, 00:12:17.495 "cntlid": 9, 00:12:17.495 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:17.495 "listen_address": { 00:12:17.495 "adrfam": "IPv4", 00:12:17.495 "traddr": "10.0.0.2", 00:12:17.495 "trsvcid": "4420", 00:12:17.495 "trtype": "TCP" 00:12:17.495 }, 00:12:17.495 "peer_address": { 00:12:17.495 "adrfam": "IPv4", 00:12:17.495 "traddr": "10.0.0.1", 00:12:17.495 "trsvcid": "57470", 00:12:17.495 "trtype": "TCP" 00:12:17.495 }, 00:12:17.495 "qid": 0, 00:12:17.495 "state": "enabled", 00:12:17.495 "thread": "nvmf_tgt_poll_group_000" 00:12:17.495 } 00:12:17.495 ]' 00:12:17.495 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.754 09:05:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.754 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.013 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:18.013 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.949 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.208 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.208 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.208 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.208 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:19.470 00:12:19.470 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.470 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.470 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.731 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.731 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.731 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.731 09:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.731 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.732 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.732 { 00:12:19.732 "auth": { 00:12:19.732 "dhgroup": "ffdhe2048", 00:12:19.732 "digest": "sha256", 00:12:19.732 "state": "completed" 00:12:19.732 }, 00:12:19.732 "cntlid": 11, 00:12:19.732 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:19.732 "listen_address": { 00:12:19.732 "adrfam": "IPv4", 00:12:19.732 "traddr": "10.0.0.2", 00:12:19.732 "trsvcid": "4420", 00:12:19.732 "trtype": "TCP" 00:12:19.732 }, 00:12:19.732 "peer_address": { 00:12:19.732 "adrfam": "IPv4", 00:12:19.732 "traddr": "10.0.0.1", 00:12:19.732 "trsvcid": "57498", 00:12:19.732 "trtype": "TCP" 00:12:19.732 }, 00:12:19.732 "qid": 0, 00:12:19.732 "state": "enabled", 00:12:19.732 "thread": "nvmf_tgt_poll_group_000" 00:12:19.732 } 00:12:19.732 ]' 00:12:19.732 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.732 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.732 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.991 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.991 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.991 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.991 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.991 09:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.250 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:20.250 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 
00:12:20.816 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.384 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.642 00:12:21.642 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.642 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.642 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.901 { 00:12:21.901 "auth": { 00:12:21.901 "dhgroup": "ffdhe2048", 00:12:21.901 "digest": "sha256", 00:12:21.901 "state": "completed" 00:12:21.901 }, 00:12:21.901 "cntlid": 13, 00:12:21.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:21.901 "listen_address": { 00:12:21.901 "adrfam": "IPv4", 00:12:21.901 "traddr": "10.0.0.2", 00:12:21.901 "trsvcid": "4420", 00:12:21.901 
"trtype": "TCP" 00:12:21.901 }, 00:12:21.901 "peer_address": { 00:12:21.901 "adrfam": "IPv4", 00:12:21.901 "traddr": "10.0.0.1", 00:12:21.901 "trsvcid": "33538", 00:12:21.901 "trtype": "TCP" 00:12:21.901 }, 00:12:21.901 "qid": 0, 00:12:21.901 "state": "enabled", 00:12:21.901 "thread": "nvmf_tgt_poll_group_000" 00:12:21.901 } 00:12:21.901 ]' 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.901 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:22.160 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:22.160 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:22.160 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:22.160 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:22.160 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.419 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:22.419 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret 
DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:22.986 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.986 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:22.986 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.986 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.987 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.987 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.987 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:22.987 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:23.554 09:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.554 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:23.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.555 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.813 00:12:23.813 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.813 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.813 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.072 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.072 { 00:12:24.072 "auth": { 00:12:24.072 "dhgroup": "ffdhe2048", 00:12:24.072 "digest": "sha256", 00:12:24.072 "state": "completed" 00:12:24.072 }, 00:12:24.072 "cntlid": 15, 00:12:24.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:24.072 "listen_address": { 00:12:24.072 "adrfam": "IPv4", 00:12:24.072 "traddr": "10.0.0.2", 00:12:24.072 "trsvcid": "4420", 00:12:24.072 "trtype": "TCP" 00:12:24.072 }, 00:12:24.072 "peer_address": { 00:12:24.072 "adrfam": "IPv4", 00:12:24.072 "traddr": "10.0.0.1", 00:12:24.072 "trsvcid": "33560", 00:12:24.072 "trtype": "TCP" 00:12:24.073 }, 00:12:24.073 "qid": 0, 00:12:24.073 "state": "enabled", 00:12:24.073 "thread": "nvmf_tgt_poll_group_000" 00:12:24.073 } 00:12:24.073 ]' 00:12:24.073 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.333 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.333 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.333 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:24.333 
09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.333 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.333 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.333 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.592 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:24.592 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:25.160 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.160 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:25.160 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.160 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.419 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.419 09:06:04 
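Editor's note: after each attach, the checks at auth.sh@73-77 in this log fetch the controller list and the subsystem's qpairs, then verify the negotiated parameters with `jq` filters (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). The same verification can be sketched in Python against a payload shaped like the qpair JSON dumped above (the sample data here is abridged from the log, not a live RPC response):

```python
import json

# Sample qpair payload shaped like the nvmf_subsystem_get_qpairs output in
# this log (abridged); the assertions mirror the script's jq checks.
qpairs = json.loads('''
[
  {
    "auth": {"dhgroup": "ffdhe2048", "digest": "sha256", "state": "completed"},
    "cntlid": 15,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000"
  }
]
''')

auth = qpairs[0]["auth"]           # jq: .[0].auth
assert auth["digest"] == "sha256"      # auth.sh@75
assert auth["dhgroup"] == "ffdhe2048"  # auth.sh@76
assert auth["state"] == "completed"    # auth.sh@77
print("negotiated:", auth["digest"], auth["dhgroup"], auth["state"])
```

Only when all three checks pass does the script proceed to detach the controller and move to the next dhgroup (`ffdhe3072` in the iterations that follow).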
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:25.419 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.419 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.419 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:25.677 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.678 09:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.678 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.937 00:12:25.937 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.937 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.937 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.505 { 
00:12:26.505 "auth": { 00:12:26.505 "dhgroup": "ffdhe3072", 00:12:26.505 "digest": "sha256", 00:12:26.505 "state": "completed" 00:12:26.505 }, 00:12:26.505 "cntlid": 17, 00:12:26.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:26.505 "listen_address": { 00:12:26.505 "adrfam": "IPv4", 00:12:26.505 "traddr": "10.0.0.2", 00:12:26.505 "trsvcid": "4420", 00:12:26.505 "trtype": "TCP" 00:12:26.505 }, 00:12:26.505 "peer_address": { 00:12:26.505 "adrfam": "IPv4", 00:12:26.505 "traddr": "10.0.0.1", 00:12:26.505 "trsvcid": "33594", 00:12:26.505 "trtype": "TCP" 00:12:26.505 }, 00:12:26.505 "qid": 0, 00:12:26.505 "state": "enabled", 00:12:26.505 "thread": "nvmf_tgt_poll_group_000" 00:12:26.505 } 00:12:26.505 ]' 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.505 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.764 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:26.764 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe3072 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.701 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.960 00:12:28.219 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.219 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.219 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.478 { 00:12:28.478 "auth": { 00:12:28.478 "dhgroup": "ffdhe3072", 00:12:28.478 "digest": "sha256", 00:12:28.478 "state": "completed" 00:12:28.478 }, 00:12:28.478 "cntlid": 19, 00:12:28.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:28.478 "listen_address": { 00:12:28.478 "adrfam": "IPv4", 00:12:28.478 "traddr": "10.0.0.2", 00:12:28.478 "trsvcid": "4420", 00:12:28.478 "trtype": "TCP" 00:12:28.478 }, 00:12:28.478 "peer_address": { 00:12:28.478 "adrfam": "IPv4", 00:12:28.478 "traddr": "10.0.0.1", 00:12:28.478 "trsvcid": "33614", 00:12:28.478 "trtype": "TCP" 00:12:28.478 }, 
00:12:28.478 "qid": 0, 00:12:28.478 "state": "enabled", 00:12:28.478 "thread": "nvmf_tgt_poll_group_000" 00:12:28.478 } 00:12:28.478 ]' 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.478 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.738 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:28.738 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:29.674 09:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.674 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:29.674 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.933 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:30.192 00:12:30.451 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.451 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.451 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.710 { 00:12:30.710 "auth": { 00:12:30.710 "dhgroup": "ffdhe3072", 00:12:30.710 "digest": "sha256", 00:12:30.710 "state": "completed" 00:12:30.710 }, 00:12:30.710 "cntlid": 21, 00:12:30.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:30.710 "listen_address": { 00:12:30.710 "adrfam": "IPv4", 00:12:30.710 "traddr": "10.0.0.2", 00:12:30.710 "trsvcid": "4420", 00:12:30.710 "trtype": "TCP" 00:12:30.710 }, 00:12:30.710 "peer_address": { 00:12:30.710 "adrfam": "IPv4", 00:12:30.710 "traddr": "10.0.0.1", 00:12:30.710 "trsvcid": "47554", 00:12:30.710 "trtype": "TCP" 00:12:30.710 }, 00:12:30.710 "qid": 0, 00:12:30.710 "state": "enabled", 00:12:30.710 "thread": "nvmf_tgt_poll_group_000" 00:12:30.710 } 00:12:30.710 ]' 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.710 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.969 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:30.969 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.906 
09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:31.906 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:32.165 09:06:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.165 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:32.424 00:12:32.424 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.424 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.424 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.057 { 00:12:33.057 "auth": { 00:12:33.057 "dhgroup": "ffdhe3072", 00:12:33.057 "digest": "sha256", 00:12:33.057 "state": "completed" 00:12:33.057 }, 00:12:33.057 "cntlid": 23, 00:12:33.057 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:33.057 "listen_address": { 00:12:33.057 "adrfam": "IPv4", 00:12:33.057 "traddr": "10.0.0.2", 00:12:33.057 "trsvcid": "4420", 00:12:33.057 "trtype": "TCP" 00:12:33.057 }, 00:12:33.057 "peer_address": { 00:12:33.057 "adrfam": "IPv4", 00:12:33.057 "traddr": "10.0.0.1", 00:12:33.057 "trsvcid": "47590", 00:12:33.057 "trtype": "TCP" 00:12:33.057 }, 00:12:33.057 "qid": 0, 00:12:33.057 "state": "enabled", 00:12:33.057 "thread": "nvmf_tgt_poll_group_000" 00:12:33.057 } 00:12:33.057 ]' 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.057 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.330 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:33.330 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:34.265 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.265 09:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.265 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.524 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.524 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.524 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.524 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:34.783 00:12:34.783 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:12:34.783 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.783 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.041 { 00:12:35.041 "auth": { 00:12:35.041 "dhgroup": "ffdhe4096", 00:12:35.041 "digest": "sha256", 00:12:35.041 "state": "completed" 00:12:35.041 }, 00:12:35.041 "cntlid": 25, 00:12:35.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:35.041 "listen_address": { 00:12:35.041 "adrfam": "IPv4", 00:12:35.041 "traddr": "10.0.0.2", 00:12:35.041 "trsvcid": "4420", 00:12:35.041 "trtype": "TCP" 00:12:35.041 }, 00:12:35.041 "peer_address": { 00:12:35.041 "adrfam": "IPv4", 00:12:35.041 "traddr": "10.0.0.1", 00:12:35.041 "trsvcid": "47628", 00:12:35.041 "trtype": "TCP" 00:12:35.041 }, 00:12:35.041 "qid": 0, 00:12:35.041 "state": "enabled", 00:12:35.041 "thread": "nvmf_tgt_poll_group_000" 00:12:35.041 } 00:12:35.041 ]' 00:12:35.041 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.301 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.301 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.301 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.301 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.301 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.301 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.301 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.560 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:35.560 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:36.497 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:36.757 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:37.016 00:12:37.274 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.275 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.275 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.537 { 00:12:37.537 "auth": { 00:12:37.537 "dhgroup": "ffdhe4096", 00:12:37.537 "digest": "sha256", 00:12:37.537 "state": "completed" 00:12:37.537 }, 00:12:37.537 "cntlid": 27, 00:12:37.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:37.537 "listen_address": { 00:12:37.537 "adrfam": "IPv4", 00:12:37.537 "traddr": "10.0.0.2", 00:12:37.537 "trsvcid": "4420", 00:12:37.537 "trtype": "TCP" 00:12:37.537 }, 00:12:37.537 "peer_address": { 00:12:37.537 "adrfam": "IPv4", 00:12:37.537 "traddr": "10.0.0.1", 00:12:37.537 "trsvcid": "47658", 00:12:37.537 "trtype": "TCP" 00:12:37.537 }, 00:12:37.537 "qid": 0, 00:12:37.537 "state": "enabled", 00:12:37.537 "thread": "nvmf_tgt_poll_group_000" 00:12:37.537 } 00:12:37.537 ]' 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:37.537 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.103 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:38.103 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:38.671 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:38.930 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:39.189 00:12:39.189 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.189 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.189 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.785 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.785 { 00:12:39.786 "auth": { 00:12:39.786 "dhgroup": "ffdhe4096", 00:12:39.786 "digest": "sha256", 00:12:39.786 "state": "completed" 00:12:39.786 }, 00:12:39.786 "cntlid": 29, 00:12:39.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:39.786 "listen_address": { 
00:12:39.786 "adrfam": "IPv4", 00:12:39.786 "traddr": "10.0.0.2", 00:12:39.786 "trsvcid": "4420", 00:12:39.786 "trtype": "TCP" 00:12:39.786 }, 00:12:39.786 "peer_address": { 00:12:39.786 "adrfam": "IPv4", 00:12:39.786 "traddr": "10.0.0.1", 00:12:39.786 "trsvcid": "47690", 00:12:39.786 "trtype": "TCP" 00:12:39.786 }, 00:12:39.786 "qid": 0, 00:12:39.786 "state": "enabled", 00:12:39.786 "thread": "nvmf_tgt_poll_group_000" 00:12:39.786 } 00:12:39.786 ]' 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.786 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.058 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:40.058 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:40.624 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:40.883 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.140 09:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.140 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.399 00:12:41.399 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.399 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.399 09:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.966 { 00:12:41.966 "auth": { 00:12:41.966 "dhgroup": "ffdhe4096", 00:12:41.966 "digest": "sha256", 00:12:41.966 "state": "completed" 00:12:41.966 }, 00:12:41.966 "cntlid": 31, 00:12:41.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:41.966 "listen_address": { 00:12:41.966 "adrfam": "IPv4", 00:12:41.966 "traddr": "10.0.0.2", 00:12:41.966 "trsvcid": "4420", 00:12:41.966 "trtype": "TCP" 00:12:41.966 }, 00:12:41.966 "peer_address": { 00:12:41.966 "adrfam": "IPv4", 00:12:41.966 "traddr": "10.0.0.1", 00:12:41.966 "trsvcid": "58286", 00:12:41.966 "trtype": "TCP" 00:12:41.966 }, 00:12:41.966 "qid": 0, 00:12:41.966 "state": "enabled", 00:12:41.966 "thread": "nvmf_tgt_poll_group_000" 00:12:41.966 } 00:12:41.966 ]' 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.966 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r 
'.[0].auth.dhgroup' 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.967 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.225 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:42.225 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:43.160 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.418 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.677 00:12:43.935 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.935 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.935 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.193 { 00:12:44.193 "auth": { 00:12:44.193 "dhgroup": "ffdhe6144", 00:12:44.193 "digest": "sha256", 00:12:44.193 "state": "completed" 00:12:44.193 }, 00:12:44.193 "cntlid": 33, 00:12:44.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:44.193 "listen_address": { 00:12:44.193 "adrfam": "IPv4", 00:12:44.193 "traddr": "10.0.0.2", 00:12:44.193 "trsvcid": "4420", 00:12:44.193 "trtype": "TCP" 00:12:44.193 }, 00:12:44.193 "peer_address": { 00:12:44.193 "adrfam": "IPv4", 00:12:44.193 "traddr": "10.0.0.1", 00:12:44.193 "trsvcid": "58328", 00:12:44.193 "trtype": "TCP" 00:12:44.193 }, 00:12:44.193 "qid": 0, 00:12:44.193 "state": "enabled", 00:12:44.193 "thread": "nvmf_tgt_poll_group_000" 00:12:44.193 } 00:12:44.193 ]' 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.193 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.193 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:44.193 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.193 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.193 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.193 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.761 09:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:44.761 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:45.329 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.588 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.588 09:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:46.154 00:12:46.155 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.155 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.155 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.413 { 00:12:46.413 "auth": { 00:12:46.413 "dhgroup": "ffdhe6144", 00:12:46.413 "digest": "sha256", 00:12:46.413 "state": "completed" 00:12:46.413 }, 00:12:46.413 "cntlid": 35, 00:12:46.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:46.413 "listen_address": { 00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.2", 00:12:46.413 "trsvcid": "4420", 00:12:46.413 "trtype": "TCP" 00:12:46.413 }, 00:12:46.413 "peer_address": { 
00:12:46.413 "adrfam": "IPv4", 00:12:46.413 "traddr": "10.0.0.1", 00:12:46.413 "trsvcid": "58348", 00:12:46.413 "trtype": "TCP" 00:12:46.413 }, 00:12:46.413 "qid": 0, 00:12:46.413 "state": "enabled", 00:12:46.413 "thread": "nvmf_tgt_poll_group_000" 00:12:46.413 } 00:12:46.413 ]' 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:46.413 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.672 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.672 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.672 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.980 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:46.980 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: 
--dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:47.546 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 
00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.804 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:47.805 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.805 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.062 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.063 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.063 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.063 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:48.628 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.628 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.628 { 00:12:48.628 "auth": { 00:12:48.628 "dhgroup": "ffdhe6144", 00:12:48.628 "digest": "sha256", 00:12:48.628 "state": "completed" 00:12:48.628 }, 00:12:48.628 "cntlid": 37, 00:12:48.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:48.628 "listen_address": { 00:12:48.628 "adrfam": "IPv4", 00:12:48.628 "traddr": "10.0.0.2", 00:12:48.628 "trsvcid": "4420", 00:12:48.628 "trtype": "TCP" 00:12:48.628 }, 00:12:48.628 "peer_address": { 00:12:48.628 "adrfam": "IPv4", 00:12:48.628 "traddr": "10.0.0.1", 00:12:48.628 "trsvcid": "58370", 00:12:48.628 "trtype": "TCP" 00:12:48.628 }, 00:12:48.628 "qid": 0, 00:12:48.628 "state": "enabled", 00:12:48.628 "thread": "nvmf_tgt_poll_group_000" 00:12:48.628 } 00:12:48.628 ]' 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == 
\f\f\d\h\e\6\1\4\4 ]] 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.886 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.143 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:49.143 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:50.076 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.334 09:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.334 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:50.900 00:12:50.900 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.900 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.900 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.158 { 00:12:51.158 "auth": { 00:12:51.158 "dhgroup": "ffdhe6144", 00:12:51.158 
"digest": "sha256", 00:12:51.158 "state": "completed" 00:12:51.158 }, 00:12:51.158 "cntlid": 39, 00:12:51.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:51.158 "listen_address": { 00:12:51.158 "adrfam": "IPv4", 00:12:51.158 "traddr": "10.0.0.2", 00:12:51.158 "trsvcid": "4420", 00:12:51.158 "trtype": "TCP" 00:12:51.158 }, 00:12:51.158 "peer_address": { 00:12:51.158 "adrfam": "IPv4", 00:12:51.158 "traddr": "10.0.0.1", 00:12:51.158 "trsvcid": "43110", 00:12:51.158 "trtype": "TCP" 00:12:51.158 }, 00:12:51.158 "qid": 0, 00:12:51.158 "state": "enabled", 00:12:51.158 "thread": "nvmf_tgt_poll_group_000" 00:12:51.158 } 00:12:51.158 ]' 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:51.158 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.158 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.159 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.159 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.724 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:51.724 09:06:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:12:52.291 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:52.291 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:52.549 09:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:52.549 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:53.115 00:12:53.115 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.115 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.115 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.682 { 00:12:53.682 "auth": { 00:12:53.682 "dhgroup": "ffdhe8192", 00:12:53.682 "digest": "sha256", 00:12:53.682 "state": "completed" 00:12:53.682 }, 00:12:53.682 "cntlid": 41, 00:12:53.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:53.682 "listen_address": { 00:12:53.682 "adrfam": "IPv4", 00:12:53.682 "traddr": "10.0.0.2", 00:12:53.682 "trsvcid": "4420", 00:12:53.682 "trtype": "TCP" 00:12:53.682 }, 00:12:53.682 "peer_address": { 00:12:53.682 "adrfam": "IPv4", 00:12:53.682 "traddr": "10.0.0.1", 00:12:53.682 "trsvcid": "43120", 00:12:53.682 "trtype": "TCP" 00:12:53.682 }, 00:12:53.682 "qid": 0, 00:12:53.682 "state": "enabled", 00:12:53.682 "thread": "nvmf_tgt_poll_group_000" 00:12:53.682 } 00:12:53.682 ]' 00:12:53.682 09:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.682 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.941 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:53.941 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:12:54.510 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:12:54.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:54.768 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.027 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:55.593 00:12:55.593 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.593 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.593 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.851 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.851 09:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.851 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.852 { 00:12:55.852 "auth": { 00:12:55.852 "dhgroup": "ffdhe8192", 00:12:55.852 "digest": "sha256", 00:12:55.852 "state": "completed" 00:12:55.852 }, 00:12:55.852 "cntlid": 43, 00:12:55.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:55.852 "listen_address": { 00:12:55.852 "adrfam": "IPv4", 00:12:55.852 "traddr": "10.0.0.2", 00:12:55.852 "trsvcid": "4420", 00:12:55.852 "trtype": "TCP" 00:12:55.852 }, 00:12:55.852 "peer_address": { 00:12:55.852 "adrfam": "IPv4", 00:12:55.852 "traddr": "10.0.0.1", 00:12:55.852 "trsvcid": "43158", 00:12:55.852 "trtype": "TCP" 00:12:55.852 }, 00:12:55.852 "qid": 0, 00:12:55.852 "state": "enabled", 00:12:55.852 "thread": "nvmf_tgt_poll_group_000" 00:12:55.852 } 00:12:55.852 ]' 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.852 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.111 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.111 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.111 09:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.111 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.111 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.370 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:56.370 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.937 09:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:56.937 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:57.196 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.130 00:12:58.130 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.130 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.130 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.389 { 00:12:58.389 "auth": { 00:12:58.389 "dhgroup": "ffdhe8192", 00:12:58.389 "digest": "sha256", 00:12:58.389 "state": "completed" 00:12:58.389 
}, 00:12:58.389 "cntlid": 45, 00:12:58.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:12:58.389 "listen_address": { 00:12:58.389 "adrfam": "IPv4", 00:12:58.389 "traddr": "10.0.0.2", 00:12:58.389 "trsvcid": "4420", 00:12:58.389 "trtype": "TCP" 00:12:58.389 }, 00:12:58.389 "peer_address": { 00:12:58.389 "adrfam": "IPv4", 00:12:58.389 "traddr": "10.0.0.1", 00:12:58.389 "trsvcid": "43176", 00:12:58.389 "trtype": "TCP" 00:12:58.389 }, 00:12:58.389 "qid": 0, 00:12:58.389 "state": "enabled", 00:12:58.389 "thread": "nvmf_tgt_poll_group_000" 00:12:58.389 } 00:12:58.389 ]' 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.389 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.956 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:58.956 09:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:59.524 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:59.783 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.351 00:13:00.609 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.609 09:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.609 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.867 { 00:13:00.867 "auth": { 00:13:00.867 "dhgroup": "ffdhe8192", 00:13:00.867 "digest": "sha256", 00:13:00.867 "state": "completed" 00:13:00.867 }, 00:13:00.867 "cntlid": 47, 00:13:00.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:00.867 "listen_address": { 00:13:00.867 "adrfam": "IPv4", 00:13:00.867 "traddr": "10.0.0.2", 00:13:00.867 "trsvcid": "4420", 00:13:00.867 "trtype": "TCP" 00:13:00.867 }, 00:13:00.867 "peer_address": { 00:13:00.867 "adrfam": "IPv4", 00:13:00.867 "traddr": "10.0.0.1", 00:13:00.867 "trsvcid": "39520", 00:13:00.867 "trtype": "TCP" 00:13:00.867 }, 00:13:00.867 "qid": 0, 00:13:00.867 "state": "enabled", 00:13:00.867 "thread": "nvmf_tgt_poll_group_000" 00:13:00.867 } 00:13:00.867 ]' 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 
00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.867 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.136 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:01.136 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.067 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.325 09:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.325 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.326 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.326 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.326 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.583 00:13:02.583 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.583 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.583 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.841 { 00:13:02.841 "auth": { 00:13:02.841 "dhgroup": "null", 00:13:02.841 "digest": "sha384", 00:13:02.841 "state": "completed" 00:13:02.841 }, 00:13:02.841 "cntlid": 49, 00:13:02.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:02.841 "listen_address": { 00:13:02.841 "adrfam": "IPv4", 00:13:02.841 "traddr": "10.0.0.2", 00:13:02.841 "trsvcid": "4420", 00:13:02.841 "trtype": "TCP" 00:13:02.841 }, 00:13:02.841 "peer_address": { 00:13:02.841 "adrfam": "IPv4", 00:13:02.841 "traddr": "10.0.0.1", 00:13:02.841 "trsvcid": "39552", 00:13:02.841 "trtype": "TCP" 00:13:02.841 }, 00:13:02.841 "qid": 0, 00:13:02.841 "state": "enabled", 00:13:02.841 "thread": "nvmf_tgt_poll_group_000" 00:13:02.841 } 00:13:02.841 ]' 00:13:02.841 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.098 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.098 09:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.356 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:03.357 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:03.923 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.182 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.440 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.440 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.440 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.440 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.699 00:13:04.699 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.699 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.699 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.957 { 00:13:04.957 "auth": { 00:13:04.957 "dhgroup": "null", 00:13:04.957 "digest": "sha384", 00:13:04.957 "state": "completed" 00:13:04.957 }, 00:13:04.957 "cntlid": 51, 00:13:04.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:04.957 "listen_address": { 00:13:04.957 
"adrfam": "IPv4", 00:13:04.957 "traddr": "10.0.0.2", 00:13:04.957 "trsvcid": "4420", 00:13:04.957 "trtype": "TCP" 00:13:04.957 }, 00:13:04.957 "peer_address": { 00:13:04.957 "adrfam": "IPv4", 00:13:04.957 "traddr": "10.0.0.1", 00:13:04.957 "trsvcid": "39574", 00:13:04.957 "trtype": "TCP" 00:13:04.957 }, 00:13:04.957 "qid": 0, 00:13:04.957 "state": "enabled", 00:13:04.957 "thread": "nvmf_tgt_poll_group_000" 00:13:04.957 } 00:13:04.957 ]' 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:04.957 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.216 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:05.216 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.216 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.216 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.216 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.474 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:05.474 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:06.040 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.040 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:06.040 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.040 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.040 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.041 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.041 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:06.041 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:06.299 09:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.299 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.866 00:13:06.866 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.866 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.866 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.125 { 00:13:07.125 "auth": { 00:13:07.125 "dhgroup": "null", 00:13:07.125 "digest": "sha384", 00:13:07.125 "state": "completed" 00:13:07.125 }, 00:13:07.125 "cntlid": 53, 00:13:07.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:07.125 "listen_address": { 00:13:07.125 "adrfam": "IPv4", 00:13:07.125 "traddr": "10.0.0.2", 00:13:07.125 "trsvcid": "4420", 00:13:07.125 "trtype": "TCP" 00:13:07.125 }, 00:13:07.125 "peer_address": { 00:13:07.125 "adrfam": "IPv4", 00:13:07.125 "traddr": "10.0.0.1", 00:13:07.125 "trsvcid": "39610", 00:13:07.125 "trtype": "TCP" 00:13:07.125 }, 00:13:07.125 "qid": 0, 00:13:07.125 "state": "enabled", 00:13:07.125 "thread": "nvmf_tgt_poll_group_000" 00:13:07.125 } 00:13:07.125 ]' 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.125 09:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.125 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.383 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:07.383 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:07.950 09:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:07.950 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:08.209 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.576 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.576 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.576 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.834 00:13:08.834 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.834 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.834 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.093 
09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.093 { 00:13:09.093 "auth": { 00:13:09.093 "dhgroup": "null", 00:13:09.093 "digest": "sha384", 00:13:09.093 "state": "completed" 00:13:09.093 }, 00:13:09.093 "cntlid": 55, 00:13:09.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:09.093 "listen_address": { 00:13:09.093 "adrfam": "IPv4", 00:13:09.093 "traddr": "10.0.0.2", 00:13:09.093 "trsvcid": "4420", 00:13:09.093 "trtype": "TCP" 00:13:09.093 }, 00:13:09.093 "peer_address": { 00:13:09.093 "adrfam": "IPv4", 00:13:09.093 "traddr": "10.0.0.1", 00:13:09.093 "trsvcid": "39648", 00:13:09.093 "trtype": "TCP" 00:13:09.093 }, 00:13:09.093 "qid": 0, 00:13:09.093 "state": "enabled", 00:13:09.093 "thread": "nvmf_tgt_poll_group_000" 00:13:09.093 } 00:13:09.093 ]' 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.093 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.657 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:09.657 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.224 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.481 09:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.481 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.739 00:13:10.739 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.739 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.739 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.306 { 00:13:11.306 "auth": { 00:13:11.306 "dhgroup": "ffdhe2048", 00:13:11.306 "digest": "sha384", 00:13:11.306 "state": "completed" 00:13:11.306 }, 00:13:11.306 "cntlid": 57, 00:13:11.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:11.306 "listen_address": { 00:13:11.306 "adrfam": "IPv4", 00:13:11.306 "traddr": "10.0.0.2", 00:13:11.306 "trsvcid": "4420", 00:13:11.306 "trtype": "TCP" 00:13:11.306 }, 00:13:11.306 "peer_address": { 00:13:11.306 "adrfam": "IPv4", 00:13:11.306 "traddr": "10.0.0.1", 00:13:11.306 "trsvcid": "60810", 00:13:11.306 "trtype": "TCP" 00:13:11.306 }, 00:13:11.306 "qid": 0, 00:13:11.306 "state": "enabled", 
00:13:11.306 "thread": "nvmf_tgt_poll_group_000" 00:13:11.306 } 00:13:11.306 ]' 00:13:11.306 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.306 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.567 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:11.567 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:12.134 09:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.393 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.393 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.652 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.910 00:13:12.910 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.910 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.910 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.477 { 00:13:13.477 "auth": { 00:13:13.477 "dhgroup": "ffdhe2048", 00:13:13.477 "digest": "sha384", 00:13:13.477 "state": "completed" 00:13:13.477 }, 00:13:13.477 "cntlid": 59, 00:13:13.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:13.477 "listen_address": { 00:13:13.477 "adrfam": "IPv4", 00:13:13.477 "traddr": "10.0.0.2", 00:13:13.477 "trsvcid": "4420", 00:13:13.477 "trtype": "TCP" 00:13:13.477 }, 00:13:13.477 "peer_address": { 00:13:13.477 "adrfam": "IPv4", 00:13:13.477 "traddr": "10.0.0.1", 00:13:13.477 "trsvcid": "60842", 00:13:13.477 "trtype": "TCP" 00:13:13.477 }, 00:13:13.477 "qid": 0, 00:13:13.477 "state": "enabled", 00:13:13.477 "thread": "nvmf_tgt_poll_group_000" 00:13:13.477 } 00:13:13.477 ]' 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.477 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.736 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:13.737 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:14.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.672 
09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:14.672 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.673 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:15.240 00:13:15.240 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.240 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.240 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.503 { 00:13:15.503 "auth": { 00:13:15.503 "dhgroup": "ffdhe2048", 00:13:15.503 "digest": "sha384", 00:13:15.503 "state": "completed" 00:13:15.503 
}, 00:13:15.503 "cntlid": 61, 00:13:15.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:15.503 "listen_address": { 00:13:15.503 "adrfam": "IPv4", 00:13:15.503 "traddr": "10.0.0.2", 00:13:15.503 "trsvcid": "4420", 00:13:15.503 "trtype": "TCP" 00:13:15.503 }, 00:13:15.503 "peer_address": { 00:13:15.503 "adrfam": "IPv4", 00:13:15.503 "traddr": "10.0.0.1", 00:13:15.503 "trsvcid": "60874", 00:13:15.503 "trtype": "TCP" 00:13:15.503 }, 00:13:15.503 "qid": 0, 00:13:15.503 "state": "enabled", 00:13:15.503 "thread": "nvmf_tgt_poll_group_000" 00:13:15.503 } 00:13:15.503 ]' 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:15.503 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:15.774 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:15.774 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:15.774 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:15.774 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:15.774 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.033 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:16.033 09:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:13:16.598 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:16.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:16.598 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:16.599 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:17.166 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:17.424
00:13:17.424 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:17.424 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:17.425 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:17.683 {
00:13:17.683 "auth": {
00:13:17.683 "dhgroup": "ffdhe2048",
00:13:17.683 "digest": "sha384",
00:13:17.683 "state": "completed"
00:13:17.683 },
00:13:17.683 "cntlid": 63,
00:13:17.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:17.683 "listen_address": {
00:13:17.683 "adrfam": "IPv4",
00:13:17.683 "traddr": "10.0.0.2",
00:13:17.683 "trsvcid": "4420",
00:13:17.683 "trtype": "TCP"
00:13:17.683 },
00:13:17.683 "peer_address": {
00:13:17.683 "adrfam": "IPv4",
00:13:17.683 "traddr": "10.0.0.1",
00:13:17.683 "trsvcid": "60902",
00:13:17.683 "trtype": "TCP"
00:13:17.683 },
00:13:17.683 "qid": 0,
00:13:17.683 "state": "enabled",
00:13:17.683 "thread": "nvmf_tgt_poll_group_000"
00:13:17.683 }
00:13:17.683 ]'
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:17.683 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:17.942 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:17.942 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:17.942 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:17.942 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:17.942 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:18.200 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:18.200 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:18.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:18.767 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.333 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.591
00:13:19.591 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:19.591 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:19.591 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:19.850 {
00:13:19.850 "auth": {
00:13:19.850 "dhgroup": "ffdhe3072",
00:13:19.850 "digest": "sha384",
00:13:19.850 "state": "completed"
00:13:19.850 },
00:13:19.850 "cntlid": 65,
00:13:19.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:19.850 "listen_address": {
00:13:19.850 "adrfam": "IPv4",
00:13:19.850 "traddr": "10.0.0.2",
00:13:19.850 "trsvcid": "4420",
00:13:19.850 "trtype": "TCP"
00:13:19.850 },
00:13:19.850 "peer_address": {
00:13:19.850 "adrfam": "IPv4",
00:13:19.850 "traddr": "10.0.0.1",
00:13:19.850 "trsvcid": "60932",
00:13:19.850 "trtype": "TCP"
00:13:19.850 },
00:13:19.850 "qid": 0,
00:13:19.850 "state": "enabled",
00:13:19.850 "thread": "nvmf_tgt_poll_group_000"
00:13:19.850 }
00:13:19.850 ]'
00:13:19.850 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:20.109 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:20.367 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:20.367 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:21.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:21.301 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.301 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.560 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.560 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.560 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.560 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.818
00:13:21.818 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:21.818 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:21.818 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:22.076 {
00:13:22.076 "auth": {
00:13:22.076 "dhgroup": "ffdhe3072",
00:13:22.076 "digest": "sha384",
00:13:22.076 "state": "completed"
00:13:22.076 },
00:13:22.076 "cntlid": 67,
00:13:22.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:22.076 "listen_address": {
00:13:22.076 "adrfam": "IPv4",
00:13:22.076 "traddr": "10.0.0.2",
00:13:22.076 "trsvcid": "4420",
00:13:22.076 "trtype": "TCP"
00:13:22.076 },
00:13:22.076 "peer_address": {
00:13:22.076 "adrfam": "IPv4",
00:13:22.076 "traddr": "10.0.0.1",
00:13:22.076 "trsvcid": "47322",
00:13:22.076 "trtype": "TCP"
00:13:22.076 },
00:13:22.076 "qid": 0,
00:13:22.076 "state": "enabled",
00:13:22.076 "thread": "nvmf_tgt_poll_group_000"
00:13:22.076 }
00:13:22.076 ]'
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:22.076 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:22.335 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:22.335 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:22.335 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:22.335 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:22.335 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:22.594 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:22.594 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:23.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:23.200 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.479 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:24.046
00:13:24.046 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:24.046 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:24.046 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:24.303 {
00:13:24.303 "auth": {
00:13:24.303 "dhgroup": "ffdhe3072",
00:13:24.303 "digest": "sha384",
00:13:24.303 "state": "completed"
00:13:24.303 },
00:13:24.303 "cntlid": 69,
00:13:24.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:24.303 "listen_address": {
00:13:24.303 "adrfam": "IPv4",
00:13:24.303 "traddr": "10.0.0.2",
00:13:24.303 "trsvcid": "4420",
00:13:24.303 "trtype": "TCP"
00:13:24.303 },
00:13:24.303 "peer_address": {
00:13:24.303 "adrfam": "IPv4",
00:13:24.303 "traddr": "10.0.0.1",
00:13:24.303 "trsvcid": "47340",
00:13:24.303 "trtype": "TCP"
00:13:24.303 },
00:13:24.303 "qid": 0,
00:13:24.303 "state": "enabled",
00:13:24.303 "thread": "nvmf_tgt_poll_group_000"
00:13:24.303 }
00:13:24.303 ]'
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:24.303 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:24.561 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:13:24.561 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:25.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:25.496 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:25.497 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:26.063
00:13:26.063 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:26.063 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:26.063 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:26.322 {
00:13:26.322 "auth": {
00:13:26.322 "dhgroup": "ffdhe3072",
00:13:26.322 "digest": "sha384",
00:13:26.322 "state": "completed"
00:13:26.322 },
00:13:26.322 "cntlid": 71,
00:13:26.322 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:26.322 "listen_address": {
00:13:26.322 "adrfam": "IPv4",
00:13:26.322 "traddr": "10.0.0.2",
00:13:26.322 "trsvcid": "4420",
00:13:26.322 "trtype": "TCP"
00:13:26.322 },
00:13:26.322 "peer_address": {
00:13:26.322 "adrfam": "IPv4",
00:13:26.322 "traddr": "10.0.0.1",
00:13:26.322 "trsvcid": "47358",
00:13:26.322 "trtype": "TCP"
00:13:26.322 },
00:13:26.322 "qid": 0,
00:13:26.322 "state": "enabled",
00:13:26.322 "thread": "nvmf_tgt_poll_group_000"
00:13:26.322 }
00:13:26.322 ]'
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:26.322 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:26.888 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:26.888 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:27.454 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:27.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:27.455 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.713 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:28.278
00:13:28.279 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:28.279 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:28.279 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:28.537 {
00:13:28.537 "auth": {
00:13:28.537 "dhgroup": "ffdhe4096",
00:13:28.537 "digest": "sha384",
00:13:28.537 "state": "completed"
00:13:28.537 },
00:13:28.537 "cntlid": 73,
00:13:28.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:28.537 "listen_address": {
00:13:28.537 "adrfam": "IPv4",
00:13:28.537 "traddr": "10.0.0.2",
00:13:28.537 "trsvcid": "4420",
00:13:28.537 "trtype": "TCP"
00:13:28.537 },
00:13:28.537 "peer_address": {
00:13:28.537 "adrfam": "IPv4",
00:13:28.537 "traddr": "10.0.0.1",
00:13:28.537 "trsvcid": "47384",
00:13:28.537 "trtype": "TCP"
00:13:28.537 },
00:13:28.537 "qid": 0,
00:13:28.537 "state": "enabled",
00:13:28.537 "thread": "nvmf_tgt_poll_group_000" 00:13:28.537 } 00:13:28.537 ]' 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.537 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.795 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:28.795 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:29.730 09:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:29.730 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.988 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.246 00:13:30.246 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.246 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.246 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.506 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.507 { 00:13:30.507 "auth": { 00:13:30.507 "dhgroup": "ffdhe4096", 00:13:30.507 "digest": "sha384", 00:13:30.507 "state": "completed" 00:13:30.507 }, 00:13:30.507 "cntlid": 75, 00:13:30.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:30.507 "listen_address": { 00:13:30.507 "adrfam": "IPv4", 00:13:30.507 "traddr": "10.0.0.2", 00:13:30.507 "trsvcid": "4420", 00:13:30.507 "trtype": "TCP" 00:13:30.507 }, 00:13:30.507 "peer_address": { 00:13:30.507 "adrfam": "IPv4", 00:13:30.507 "traddr": "10.0.0.1", 00:13:30.507 "trsvcid": "54316", 00:13:30.507 "trtype": "TCP" 00:13:30.507 }, 00:13:30.507 "qid": 0, 00:13:30.507 "state": "enabled", 00:13:30.507 "thread": "nvmf_tgt_poll_group_000" 00:13:30.507 } 00:13:30.507 ]' 00:13:30.507 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.768 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.027 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:31.027 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.962 
09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.962 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.530 00:13:32.530 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.530 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.530 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.788 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.788 { 00:13:32.788 "auth": { 00:13:32.788 "dhgroup": "ffdhe4096", 00:13:32.788 "digest": "sha384", 00:13:32.788 "state": "completed" 00:13:32.788 
}, 00:13:32.788 "cntlid": 77, 00:13:32.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:32.789 "listen_address": { 00:13:32.789 "adrfam": "IPv4", 00:13:32.789 "traddr": "10.0.0.2", 00:13:32.789 "trsvcid": "4420", 00:13:32.789 "trtype": "TCP" 00:13:32.789 }, 00:13:32.789 "peer_address": { 00:13:32.789 "adrfam": "IPv4", 00:13:32.789 "traddr": "10.0.0.1", 00:13:32.789 "trsvcid": "54336", 00:13:32.789 "trtype": "TCP" 00:13:32.789 }, 00:13:32.789 "qid": 0, 00:13:32.789 "state": "enabled", 00:13:32.789 "thread": "nvmf_tgt_poll_group_000" 00:13:32.789 } 00:13:32.789 ]' 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.789 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.047 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:33.047 09:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.983 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.550 00:13:34.550 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.550 09:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.550 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.809 { 00:13:34.809 "auth": { 00:13:34.809 "dhgroup": "ffdhe4096", 00:13:34.809 "digest": "sha384", 00:13:34.809 "state": "completed" 00:13:34.809 }, 00:13:34.809 "cntlid": 79, 00:13:34.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:34.809 "listen_address": { 00:13:34.809 "adrfam": "IPv4", 00:13:34.809 "traddr": "10.0.0.2", 00:13:34.809 "trsvcid": "4420", 00:13:34.809 "trtype": "TCP" 00:13:34.809 }, 00:13:34.809 "peer_address": { 00:13:34.809 "adrfam": "IPv4", 00:13:34.809 "traddr": "10.0.0.1", 00:13:34.809 "trsvcid": "54362", 00:13:34.809 "trtype": "TCP" 00:13:34.809 }, 00:13:34.809 "qid": 0, 00:13:34.809 "state": "enabled", 00:13:34.809 "thread": "nvmf_tgt_poll_group_000" 00:13:34.809 } 00:13:34.809 ]' 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.809 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.068 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.068 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.068 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.327 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:35.327 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:35.894 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.153 
09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:36.153 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:36.758
00:13:36.758 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:36.758 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:36.758 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:37.015 {
00:13:37.015 "auth": {
00:13:37.015 "dhgroup": "ffdhe6144",
00:13:37.015 "digest": "sha384",
00:13:37.015 "state": "completed"
00:13:37.015 },
00:13:37.015 "cntlid": 81,
00:13:37.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:37.015 "listen_address": {
00:13:37.015 "adrfam": "IPv4",
00:13:37.015 "traddr": "10.0.0.2",
00:13:37.015 "trsvcid": "4420",
00:13:37.015 "trtype": "TCP"
00:13:37.015 },
00:13:37.015 "peer_address": {
00:13:37.015 "adrfam": "IPv4",
00:13:37.015 "traddr": "10.0.0.1",
00:13:37.015 "trsvcid": "54374",
00:13:37.015 "trtype": "TCP"
00:13:37.015 },
00:13:37.015 "qid": 0,
00:13:37.015 "state": "enabled",
00:13:37.015 "thread": "nvmf_tgt_poll_group_000"
00:13:37.015 }
00:13:37.015 ]'
00:13:37.015 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:37.273 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:37.273 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:37.273 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:37.273 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:37.273 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:37.273 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:37.273 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:37.531 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:37.531 09:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:38.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:38.467 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:38.725 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:38.983
00:13:39.243 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:39.243 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:39.243 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:39.503 {
00:13:39.503 "auth": {
00:13:39.503 "dhgroup": "ffdhe6144",
00:13:39.503 "digest": "sha384",
00:13:39.503 "state": "completed"
00:13:39.503 },
00:13:39.503 "cntlid": 83,
00:13:39.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:39.503 "listen_address": {
00:13:39.503 "adrfam": "IPv4",
00:13:39.503 "traddr": "10.0.0.2",
00:13:39.503 "trsvcid": "4420",
00:13:39.503 "trtype": "TCP"
00:13:39.503 },
00:13:39.503 "peer_address": {
00:13:39.503 "adrfam": "IPv4",
00:13:39.503 "traddr": "10.0.0.1",
00:13:39.503 "trsvcid": "54394",
00:13:39.503 "trtype": "TCP"
00:13:39.503 },
00:13:39.503 "qid": 0,
00:13:39.503 "state": "enabled",
00:13:39.503 "thread": "nvmf_tgt_poll_group_000"
00:13:39.503 }
00:13:39.503 ]'
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:39.503 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:40.071 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:40.071 09:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:40.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:40.639 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:40.898 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:41.465
00:13:41.465 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:41.465 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:41.465 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:41.724 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:41.725 {
00:13:41.725 "auth": {
00:13:41.725 "dhgroup": "ffdhe6144",
00:13:41.725 "digest": "sha384",
00:13:41.725 "state": "completed"
00:13:41.725 },
00:13:41.725 "cntlid": 85,
00:13:41.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:41.725 "listen_address": {
00:13:41.725 "adrfam": "IPv4",
00:13:41.725 "traddr": "10.0.0.2",
00:13:41.725 "trsvcid": "4420",
00:13:41.725 "trtype": "TCP"
00:13:41.725 },
00:13:41.725 "peer_address": {
00:13:41.725 "adrfam": "IPv4",
00:13:41.725 "traddr": "10.0.0.1",
00:13:41.725 "trsvcid": "50250",
00:13:41.725 "trtype": "TCP"
00:13:41.725 },
00:13:41.725 "qid": 0,
00:13:41.725 "state": "enabled",
00:13:41.725 "thread": "nvmf_tgt_poll_group_000"
00:13:41.725 }
00:13:41.725 ]'
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:41.725 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:41.984 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:13:41.984 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:42.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:42.943 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:43.201 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:43.768
00:13:43.768 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:43.768 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:43.768 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:44.026 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:44.027 {
00:13:44.027 "auth": {
00:13:44.027 "dhgroup": "ffdhe6144",
00:13:44.027 "digest": "sha384",
00:13:44.027 "state": "completed"
00:13:44.027 },
00:13:44.027 "cntlid": 87,
00:13:44.027 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:44.027 "listen_address": {
00:13:44.027 "adrfam": "IPv4",
00:13:44.027 "traddr": "10.0.0.2",
00:13:44.027 "trsvcid": "4420",
00:13:44.027 "trtype": "TCP"
00:13:44.027 },
00:13:44.027 "peer_address": {
00:13:44.027 "adrfam": "IPv4",
00:13:44.027 "traddr": "10.0.0.1",
00:13:44.027 "trsvcid": "50274",
00:13:44.027 "trtype": "TCP"
00:13:44.027 },
00:13:44.027 "qid": 0,
00:13:44.027 "state": "enabled",
00:13:44.027 "thread": "nvmf_tgt_poll_group_000"
00:13:44.027 }
00:13:44.027 ]'
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:44.027 09:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:44.595 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:44.595 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:45.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:45.161 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.420 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:46.357
00:13:46.357 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:46.357 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:46.357 09:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:46.357 {
00:13:46.357 "auth": {
00:13:46.357 "dhgroup": "ffdhe8192",
00:13:46.357 "digest": "sha384",
00:13:46.357 "state": "completed"
00:13:46.357 },
00:13:46.357 "cntlid": 89,
00:13:46.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:46.357 "listen_address": {
00:13:46.357 "adrfam": "IPv4",
00:13:46.357 "traddr": "10.0.0.2",
00:13:46.357 "trsvcid": "4420",
00:13:46.357 "trtype": "TCP"
00:13:46.357 },
00:13:46.357 "peer_address": {
00:13:46.357 "adrfam": "IPv4",
00:13:46.357 "traddr": "10.0.0.1",
00:13:46.357 "trsvcid": "50302",
00:13:46.357 "trtype": "TCP"
00:13:46.357 },
00:13:46.357 "qid": 0,
00:13:46.357 "state": "enabled",
00:13:46.357 "thread": "nvmf_tgt_poll_group_000"
00:13:46.357 }
00:13:46.357 ]'
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:46.357 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:46.616 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:46.616 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:46.616 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:46.616 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:46.616 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:46.875 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:46.875 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:47.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:47.442 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:47.701 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:48.268
00:13:48.268 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:48.268 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:48.268 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:48.836 {
00:13:48.836 "auth": {
00:13:48.836 "dhgroup": "ffdhe8192",
00:13:48.836 "digest": "sha384",
00:13:48.836 "state": "completed"
00:13:48.836 },
00:13:48.836 "cntlid": 91,
00:13:48.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:13:48.836 "listen_address": {
00:13:48.836 "adrfam": "IPv4",
00:13:48.836 "traddr": "10.0.0.2",
00:13:48.836 "trsvcid": "4420",
00:13:48.836 "trtype": "TCP"
00:13:48.836 },
00:13:48.836 "peer_address": {
00:13:48.836 "adrfam": "IPv4",
00:13:48.836 "traddr": "10.0.0.1",
00:13:48.836 "trsvcid": "50338",
00:13:48.836 "trtype": "TCP"
00:13:48.836 },
00:13:48.836 "qid": 0,
00:13:48.836 "state": "enabled",
00:13:48.836 "thread": "nvmf_tgt_poll_group_000"
00:13:48.836 }
00:13:48.836 ]'
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:48.836 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:49.094 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:49.094 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:50.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.038
09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:50.038 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.039 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.016 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.016 { 00:13:51.016 "auth": { 00:13:51.016 "dhgroup": "ffdhe8192", 00:13:51.016 "digest": "sha384", 00:13:51.016 "state": "completed" 00:13:51.016 
}, 00:13:51.016 "cntlid": 93, 00:13:51.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:51.016 "listen_address": { 00:13:51.016 "adrfam": "IPv4", 00:13:51.016 "traddr": "10.0.0.2", 00:13:51.016 "trsvcid": "4420", 00:13:51.016 "trtype": "TCP" 00:13:51.016 }, 00:13:51.016 "peer_address": { 00:13:51.016 "adrfam": "IPv4", 00:13:51.016 "traddr": "10.0.0.1", 00:13:51.016 "trsvcid": "58226", 00:13:51.016 "trtype": "TCP" 00:13:51.016 }, 00:13:51.016 "qid": 0, 00:13:51.016 "state": "enabled", 00:13:51.016 "thread": "nvmf_tgt_poll_group_000" 00:13:51.016 } 00:13:51.016 ]' 00:13:51.016 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.275 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:51.275 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.275 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:51.275 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.275 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.275 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.275 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.534 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:51.534 09:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:52.104 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.363 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.931 00:13:53.190 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.190 09:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.190 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.449 { 00:13:53.449 "auth": { 00:13:53.449 "dhgroup": "ffdhe8192", 00:13:53.449 "digest": "sha384", 00:13:53.449 "state": "completed" 00:13:53.449 }, 00:13:53.449 "cntlid": 95, 00:13:53.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:53.449 "listen_address": { 00:13:53.449 "adrfam": "IPv4", 00:13:53.449 "traddr": "10.0.0.2", 00:13:53.449 "trsvcid": "4420", 00:13:53.449 "trtype": "TCP" 00:13:53.449 }, 00:13:53.449 "peer_address": { 00:13:53.449 "adrfam": "IPv4", 00:13:53.449 "traddr": "10.0.0.1", 00:13:53.449 "trsvcid": "58252", 00:13:53.449 "trtype": "TCP" 00:13:53.449 }, 00:13:53.449 "qid": 0, 00:13:53.449 "state": "enabled", 00:13:53.449 "thread": "nvmf_tgt_poll_group_000" 00:13:53.449 } 00:13:53.449 ]' 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 
00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.449 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.707 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:53.707 09:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.644 09:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.644 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.903 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.903 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.903 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:54.903 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:55.162 00:13:55.162 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.162 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.162 09:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.420 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.420 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.420 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:55.420 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.420 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.421 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.421 { 00:13:55.421 "auth": { 00:13:55.421 "dhgroup": "null", 00:13:55.421 "digest": "sha512", 00:13:55.421 "state": "completed" 00:13:55.421 }, 00:13:55.421 "cntlid": 97, 00:13:55.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:55.421 "listen_address": { 00:13:55.421 "adrfam": "IPv4", 00:13:55.421 "traddr": "10.0.0.2", 00:13:55.421 "trsvcid": "4420", 00:13:55.421 "trtype": "TCP" 00:13:55.421 }, 00:13:55.421 "peer_address": { 00:13:55.421 "adrfam": "IPv4", 00:13:55.421 "traddr": "10.0.0.1", 00:13:55.421 "trsvcid": "58290", 00:13:55.421 "trtype": "TCP" 00:13:55.421 }, 00:13:55.421 "qid": 0, 00:13:55.421 "state": "enabled", 00:13:55.421 "thread": "nvmf_tgt_poll_group_000" 00:13:55.421 } 00:13:55.421 ]' 00:13:55.421 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.421 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.421 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.679 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:55.679 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.679 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.679 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.679 09:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.938 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:55.938 09:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:56.505 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.072 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.073 09:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.331 00:13:57.331 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.331 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.331 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.589 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.590 { 00:13:57.590 "auth": { 00:13:57.590 "dhgroup": "null", 00:13:57.590 "digest": "sha512", 00:13:57.590 "state": "completed" 00:13:57.590 }, 00:13:57.590 "cntlid": 99, 00:13:57.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:57.590 "listen_address": { 00:13:57.590 
"adrfam": "IPv4", 00:13:57.590 "traddr": "10.0.0.2", 00:13:57.590 "trsvcid": "4420", 00:13:57.590 "trtype": "TCP" 00:13:57.590 }, 00:13:57.590 "peer_address": { 00:13:57.590 "adrfam": "IPv4", 00:13:57.590 "traddr": "10.0.0.1", 00:13:57.590 "trsvcid": "58310", 00:13:57.590 "trtype": "TCP" 00:13:57.590 }, 00:13:57.590 "qid": 0, 00:13:57.590 "state": "enabled", 00:13:57.590 "thread": "nvmf_tgt_poll_group_000" 00:13:57.590 } 00:13:57.590 ]' 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:57.590 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.848 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:57.848 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.848 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.848 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.848 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.107 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:58.107 09:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:58.675 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:59.244 09:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.244 09:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.508 00:13:59.508 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.508 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.508 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.768 { 00:13:59.768 "auth": { 00:13:59.768 "dhgroup": "null", 00:13:59.768 "digest": "sha512", 00:13:59.768 "state": "completed" 00:13:59.768 }, 00:13:59.768 "cntlid": 101, 00:13:59.768 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:13:59.768 "listen_address": { 00:13:59.768 "adrfam": "IPv4", 00:13:59.768 "traddr": "10.0.0.2", 00:13:59.768 "trsvcid": "4420", 00:13:59.768 "trtype": "TCP" 00:13:59.768 }, 00:13:59.768 "peer_address": { 00:13:59.768 "adrfam": "IPv4", 00:13:59.768 "traddr": "10.0.0.1", 00:13:59.768 "trsvcid": "58326", 00:13:59.768 "trtype": "TCP" 00:13:59.768 }, 00:13:59.768 "qid": 0, 00:13:59.768 "state": "enabled", 00:13:59.768 "thread": "nvmf_tgt_poll_group_000" 00:13:59.768 } 00:13:59.768 ]' 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.768 09:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:59.768 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.027 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.027 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.027 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.285 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:00.285 09:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:00.851 09:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:00.851 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.109 09:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.109 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.109 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:01.109 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.109 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:01.676 00:14:01.676 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.676 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.676 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.935 
09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.935 { 00:14:01.935 "auth": { 00:14:01.935 "dhgroup": "null", 00:14:01.935 "digest": "sha512", 00:14:01.935 "state": "completed" 00:14:01.935 }, 00:14:01.935 "cntlid": 103, 00:14:01.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:01.935 "listen_address": { 00:14:01.935 "adrfam": "IPv4", 00:14:01.935 "traddr": "10.0.0.2", 00:14:01.935 "trsvcid": "4420", 00:14:01.935 "trtype": "TCP" 00:14:01.935 }, 00:14:01.935 "peer_address": { 00:14:01.935 "adrfam": "IPv4", 00:14:01.935 "traddr": "10.0.0.1", 00:14:01.935 "trsvcid": "53952", 00:14:01.935 "trtype": "TCP" 00:14:01.935 }, 00:14:01.935 "qid": 0, 00:14:01.935 "state": "enabled", 00:14:01.935 "thread": "nvmf_tgt_poll_group_000" 00:14:01.935 } 00:14:01.935 ]' 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.935 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.936 09:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.503 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # 
nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:02.503 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:03.070 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:03.328 09:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.328 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:03.896 00:14:03.896 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.896 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.896 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.153 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.153 { 00:14:04.153 "auth": { 00:14:04.153 "dhgroup": "ffdhe2048", 00:14:04.153 "digest": "sha512", 00:14:04.153 "state": "completed" 00:14:04.153 }, 00:14:04.153 "cntlid": 105, 00:14:04.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:04.153 "listen_address": { 00:14:04.153 "adrfam": "IPv4", 00:14:04.153 "traddr": "10.0.0.2", 00:14:04.153 "trsvcid": "4420", 00:14:04.153 "trtype": "TCP" 00:14:04.153 }, 00:14:04.153 "peer_address": { 00:14:04.153 "adrfam": "IPv4", 00:14:04.153 "traddr": "10.0.0.1", 00:14:04.153 "trsvcid": "53984", 00:14:04.153 "trtype": "TCP" 00:14:04.153 }, 00:14:04.153 "qid": 0, 00:14:04.154 "state": "enabled", 
00:14:04.154 "thread": "nvmf_tgt_poll_group_000" 00:14:04.154 } 00:14:04.154 ]' 00:14:04.154 09:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.154 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:04.154 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:04.412 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:04.412 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:04.412 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:04.412 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.412 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.671 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:04.672 09:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:05.609 09:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.609 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.868 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.868 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.868 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:05.868 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.127 00:14:06.127 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.127 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.127 09:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.385 { 00:14:06.385 "auth": { 00:14:06.385 "dhgroup": "ffdhe2048", 00:14:06.385 "digest": "sha512", 00:14:06.385 "state": "completed" 00:14:06.385 }, 00:14:06.385 "cntlid": 107, 00:14:06.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:06.385 "listen_address": { 00:14:06.385 "adrfam": "IPv4", 00:14:06.385 "traddr": "10.0.0.2", 00:14:06.385 "trsvcid": "4420", 00:14:06.385 "trtype": "TCP" 00:14:06.385 }, 00:14:06.385 "peer_address": { 00:14:06.385 "adrfam": "IPv4", 00:14:06.385 "traddr": "10.0.0.1", 00:14:06.385 "trsvcid": "53998", 00:14:06.385 "trtype": "TCP" 00:14:06.385 }, 00:14:06.385 "qid": 0, 00:14:06.385 "state": "enabled", 00:14:06.385 "thread": "nvmf_tgt_poll_group_000" 00:14:06.385 } 00:14:06.385 ]' 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.385 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.644 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:06.644 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:14:06.644 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.644 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.644 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.902 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:06.902 09:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.839 
09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:07.839 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.098 09:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.368 00:14:08.368 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.368 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.368 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.642 { 00:14:08.642 "auth": { 00:14:08.642 "dhgroup": "ffdhe2048", 00:14:08.642 "digest": "sha512", 00:14:08.642 "state": "completed" 00:14:08.642 
}, 00:14:08.642 "cntlid": 109, 00:14:08.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:08.642 "listen_address": { 00:14:08.642 "adrfam": "IPv4", 00:14:08.642 "traddr": "10.0.0.2", 00:14:08.642 "trsvcid": "4420", 00:14:08.642 "trtype": "TCP" 00:14:08.642 }, 00:14:08.642 "peer_address": { 00:14:08.642 "adrfam": "IPv4", 00:14:08.642 "traddr": "10.0.0.1", 00:14:08.642 "trsvcid": "54038", 00:14:08.642 "trtype": "TCP" 00:14:08.642 }, 00:14:08.642 "qid": 0, 00:14:08.642 "state": "enabled", 00:14:08.642 "thread": "nvmf_tgt_poll_group_000" 00:14:08.642 } 00:14:08.642 ]' 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.642 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.900 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:08.900 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.900 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.900 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.900 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.159 09:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:09.159 09:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:10.096 09:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.096 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.355 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.355 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.356 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.356 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.614 00:14:10.614 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.614 09:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.614 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.181 { 00:14:11.181 "auth": { 00:14:11.181 "dhgroup": "ffdhe2048", 00:14:11.181 "digest": "sha512", 00:14:11.181 "state": "completed" 00:14:11.181 }, 00:14:11.181 "cntlid": 111, 00:14:11.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:11.181 "listen_address": { 00:14:11.181 "adrfam": "IPv4", 00:14:11.181 "traddr": "10.0.0.2", 00:14:11.181 "trsvcid": "4420", 00:14:11.181 "trtype": "TCP" 00:14:11.181 }, 00:14:11.181 "peer_address": { 00:14:11.181 "adrfam": "IPv4", 00:14:11.181 "traddr": "10.0.0.1", 00:14:11.181 "trsvcid": "35626", 00:14:11.181 "trtype": "TCP" 00:14:11.181 }, 00:14:11.181 "qid": 0, 00:14:11.181 "state": "enabled", 00:14:11.181 "thread": "nvmf_tgt_poll_group_000" 00:14:11.181 } 00:14:11.181 ]' 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == 
\s\h\a\5\1\2 ]] 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:11.181 09:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.181 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.181 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.181 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.440 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:11.440 09:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:12.375 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:12.635 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:13.202 00:14:13.202 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.202 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.202 09:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.461 { 00:14:13.461 "auth": { 00:14:13.461 "dhgroup": "ffdhe3072", 00:14:13.461 "digest": "sha512", 00:14:13.461 "state": "completed" 00:14:13.461 }, 00:14:13.461 "cntlid": 113, 00:14:13.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:13.461 "listen_address": { 00:14:13.461 "adrfam": "IPv4", 00:14:13.461 "traddr": "10.0.0.2", 00:14:13.461 "trsvcid": "4420", 00:14:13.461 "trtype": "TCP" 00:14:13.461 }, 00:14:13.461 "peer_address": { 00:14:13.461 "adrfam": "IPv4", 00:14:13.461 "traddr": "10.0.0.1", 00:14:13.461 "trsvcid": "35660", 00:14:13.461 "trtype": "TCP" 00:14:13.461 }, 00:14:13.461 "qid": 0, 00:14:13.461 "state": "enabled", 00:14:13.461 "thread": "nvmf_tgt_poll_group_000" 00:14:13.461 } 00:14:13.461 ]' 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.461 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:14.029 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:14.029 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:14.597 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.597 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:14.597 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.597 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.856 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.856 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.856 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:14.856 09:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:15.116 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:15.375 00:14:15.633 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.633 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.633 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.892 { 00:14:15.892 "auth": { 00:14:15.892 "dhgroup": "ffdhe3072", 00:14:15.892 "digest": "sha512", 00:14:15.892 "state": "completed" 00:14:15.892 }, 00:14:15.892 "cntlid": 115, 00:14:15.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:15.892 "listen_address": { 00:14:15.892 "adrfam": "IPv4", 00:14:15.892 "traddr": "10.0.0.2", 00:14:15.892 "trsvcid": "4420", 00:14:15.892 "trtype": "TCP" 
00:14:15.892 }, 00:14:15.892 "peer_address": { 00:14:15.892 "adrfam": "IPv4", 00:14:15.892 "traddr": "10.0.0.1", 00:14:15.892 "trsvcid": "35696", 00:14:15.892 "trtype": "TCP" 00:14:15.892 }, 00:14:15.892 "qid": 0, 00:14:15.892 "state": "enabled", 00:14:15.892 "thread": "nvmf_tgt_poll_group_000" 00:14:15.892 } 00:14:15.892 ]' 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:15.892 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.151 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.151 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.151 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.410 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:16.410 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret 
DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:17.346 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.347 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.605 09:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.605 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:17.864 00:14:17.864 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.864 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.864 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.127 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.128 { 00:14:18.128 "auth": { 00:14:18.128 "dhgroup": "ffdhe3072", 00:14:18.128 "digest": "sha512", 00:14:18.128 "state": "completed" 00:14:18.128 }, 00:14:18.128 "cntlid": 117, 00:14:18.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:18.128 "listen_address": { 00:14:18.128 "adrfam": "IPv4", 00:14:18.128 "traddr": "10.0.0.2", 00:14:18.128 "trsvcid": "4420", 00:14:18.128 "trtype": "TCP" 00:14:18.128 }, 00:14:18.128 "peer_address": { 00:14:18.128 "adrfam": "IPv4", 00:14:18.128 "traddr": "10.0.0.1", 00:14:18.128 "trsvcid": "35726", 00:14:18.128 "trtype": "TCP" 00:14:18.128 }, 00:14:18.128 "qid": 0, 00:14:18.128 "state": "enabled", 00:14:18.128 "thread": "nvmf_tgt_poll_group_000" 00:14:18.128 } 00:14:18.128 ]' 00:14:18.128 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.399 09:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.399 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.657 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:18.657 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:19.225 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.484 
09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:19.484 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:19.744 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:20.003
00:14:20.261 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:20.261 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.261 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:20.519 {
00:14:20.519 "auth": {
00:14:20.519 "dhgroup": "ffdhe3072",
00:14:20.519 "digest": "sha512",
00:14:20.519 "state": "completed"
00:14:20.519 },
00:14:20.519 "cntlid": 119,
00:14:20.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:20.519 "listen_address": {
00:14:20.519 "adrfam": "IPv4",
00:14:20.519 "traddr": "10.0.0.2",
00:14:20.519 "trsvcid": "4420",
00:14:20.519 "trtype": "TCP"
00:14:20.519 },
00:14:20.519 "peer_address": {
00:14:20.519 "adrfam": "IPv4",
00:14:20.519 "traddr": "10.0.0.1",
00:14:20.519 "trsvcid": "57158",
00:14:20.519 "trtype": "TCP"
00:14:20.519 },
00:14:20.519 "qid": 0,
00:14:20.519 "state": "enabled",
00:14:20.519 "thread": "nvmf_tgt_poll_group_000"
00:14:20.519 }
00:14:20.519 ]'
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:20.519 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.086 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:14:21.086 09:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:21.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:21.654 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:22.222 09:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:22.481
00:14:22.481 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:22.481 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:22.481 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:23.048 {
00:14:23.048 "auth": {
00:14:23.048 "dhgroup": "ffdhe4096",
00:14:23.048 "digest": "sha512",
00:14:23.048 "state": "completed"
00:14:23.048 },
00:14:23.048 "cntlid": 121,
00:14:23.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:23.048 "listen_address": {
00:14:23.048 "adrfam": "IPv4",
00:14:23.048 "traddr": "10.0.0.2",
00:14:23.048 "trsvcid": "4420",
00:14:23.048 "trtype": "TCP"
00:14:23.048 },
00:14:23.048 "peer_address": {
00:14:23.048 "adrfam": "IPv4",
00:14:23.048 "traddr": "10.0.0.1",
00:14:23.048 "trsvcid": "57180",
00:14:23.048 "trtype": "TCP"
00:14:23.048 },
00:14:23.048 "qid": 0,
00:14:23.048 "state": "enabled",
00:14:23.048 "thread": "nvmf_tgt_poll_group_000"
00:14:23.048 }
00:14:23.048 ]'
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:23.048 09:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:23.307 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:14:23.307 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=:
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:24.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:24.243 09:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:24.502 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:25.069
00:14:25.069 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:25.069 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:25.069 09:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:25.327 {
00:14:25.327 "auth": {
00:14:25.327 "dhgroup": "ffdhe4096",
00:14:25.327 "digest": "sha512",
00:14:25.327 "state": "completed"
00:14:25.327 },
00:14:25.327 "cntlid": 123,
00:14:25.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:25.327 "listen_address": {
00:14:25.327 "adrfam": "IPv4",
00:14:25.327 "traddr": "10.0.0.2",
00:14:25.327 "trsvcid": "4420",
00:14:25.327 "trtype": "TCP"
00:14:25.327 },
00:14:25.327 "peer_address": {
00:14:25.327 "adrfam": "IPv4",
00:14:25.327 "traddr": "10.0.0.1",
00:14:25.327 "trsvcid": "57204",
00:14:25.327 "trtype": "TCP"
00:14:25.327 },
00:14:25.327 "qid": 0,
00:14:25.327 "state": "enabled",
00:14:25.327 "thread": "nvmf_tgt_poll_group_000"
00:14:25.327 }
00:14:25.327 ]'
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:25.327 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:25.892 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:14:25.892 09:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==:
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:26.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:26.493 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:26.750 09:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:27.315
00:14:27.315 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:27.315 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:27.315 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:27.573 {
00:14:27.573 "auth": {
00:14:27.573 "dhgroup": "ffdhe4096",
00:14:27.573 "digest": "sha512",
00:14:27.573 "state": "completed"
00:14:27.573 },
00:14:27.573 "cntlid": 125,
00:14:27.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:27.573 "listen_address": {
00:14:27.573 "adrfam": "IPv4",
00:14:27.573 "traddr": "10.0.0.2",
00:14:27.573 "trsvcid": "4420",
00:14:27.573 "trtype": "TCP"
00:14:27.573 },
00:14:27.573 "peer_address": {
00:14:27.573 "adrfam": "IPv4",
00:14:27.573 "traddr": "10.0.0.1",
00:14:27.573 "trsvcid": "57226",
00:14:27.573 "trtype": "TCP"
00:14:27.573 },
00:14:27.573 "qid": 0,
00:14:27.573 "state": "enabled",
00:14:27.573 "thread": "nvmf_tgt_poll_group_000"
00:14:27.573 }
00:14:27.573 ]'
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:27.573 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:27.831 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:27.831 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:27.831 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:27.831 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:27.831 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:28.090 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:14:28.090 09:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl:
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:29.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:29.024 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:29.282 09:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:29.540
00:14:29.540 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:29.540 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:29.540 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:30.105 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:30.106 {
00:14:30.106 "auth": {
00:14:30.106 "dhgroup": "ffdhe4096",
00:14:30.106 "digest": "sha512",
00:14:30.106 "state": "completed"
00:14:30.106 },
00:14:30.106 "cntlid": 127,
00:14:30.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:30.106 "listen_address": {
00:14:30.106 "adrfam": "IPv4",
00:14:30.106 "traddr": "10.0.0.2",
00:14:30.106 "trsvcid": "4420",
00:14:30.106 "trtype": "TCP"
00:14:30.106 },
00:14:30.106 "peer_address": {
00:14:30.106 "adrfam": "IPv4",
00:14:30.106 "traddr": "10.0.0.1",
00:14:30.106 "trsvcid": "57254",
00:14:30.106 "trtype": "TCP"
00:14:30.106 },
00:14:30.106 "qid": 0,
00:14:30.106 "state": "enabled",
00:14:30.106 "thread": "nvmf_tgt_poll_group_000"
00:14:30.106 }
00:14:30.106 ]'
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:30.106 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:30.364 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:14:30.364 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=:
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:31.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:31.298 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:31.298 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:31.863
00:14:31.863 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:31.863 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:31.863 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:32.429 {
00:14:32.429 "auth": {
00:14:32.429 "dhgroup": "ffdhe6144",
00:14:32.429 "digest": "sha512",
00:14:32.429 "state": "completed"
00:14:32.429 },
00:14:32.429 "cntlid": 129,
00:14:32.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468",
00:14:32.429 "listen_address": {
00:14:32.429 "adrfam": "IPv4",
00:14:32.429 "traddr": "10.0.0.2",
00:14:32.429 "trsvcid": "4420",
00:14:32.429 "trtype": "TCP"
00:14:32.429 },
00:14:32.429 "peer_address": {
00:14:32.429 "adrfam": "IPv4",
00:14:32.429 "traddr": "10.0.0.1",
00:14:32.429 "trsvcid": "35314",
00:14:32.429 "trtype": "TCP"
00:14:32.429 },
00:14:32.429 "qid": 0,
00:14:32.429 "state": "enabled",
00:14:32.429 "thread": "nvmf_tgt_poll_group_000"
00:14:32.429 }
00:14:32.429 ]'
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:32.429 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.995 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:32.995 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.561 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:33.561 09:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:34.128 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.387 00:14:34.646 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:34.646 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.646 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:34.905 { 00:14:34.905 "auth": { 00:14:34.905 "dhgroup": "ffdhe6144", 00:14:34.905 "digest": "sha512", 00:14:34.905 "state": "completed" 00:14:34.905 }, 00:14:34.905 "cntlid": 131, 00:14:34.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:34.905 "listen_address": { 00:14:34.905 "adrfam": "IPv4", 00:14:34.905 "traddr": "10.0.0.2", 00:14:34.905 "trsvcid": "4420", 00:14:34.905 "trtype": "TCP" 
00:14:34.905 }, 00:14:34.905 "peer_address": { 00:14:34.905 "adrfam": "IPv4", 00:14:34.905 "traddr": "10.0.0.1", 00:14:34.905 "trsvcid": "35338", 00:14:34.905 "trtype": "TCP" 00:14:34.905 }, 00:14:34.905 "qid": 0, 00:14:34.905 "state": "enabled", 00:14:34.905 "thread": "nvmf_tgt_poll_group_000" 00:14:34.905 } 00:14:34.905 ]' 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:34.905 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.163 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.163 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.163 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.431 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:35.431 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret 
DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.001 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:36.567 09:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.567 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.568 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.136 00:14:37.136 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.136 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.136 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.395 { 00:14:37.395 "auth": { 00:14:37.395 "dhgroup": "ffdhe6144", 00:14:37.395 "digest": "sha512", 00:14:37.395 "state": "completed" 00:14:37.395 }, 00:14:37.395 "cntlid": 133, 00:14:37.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:37.395 "listen_address": { 00:14:37.395 "adrfam": "IPv4", 00:14:37.395 "traddr": "10.0.0.2", 00:14:37.395 "trsvcid": "4420", 00:14:37.395 "trtype": "TCP" 00:14:37.395 }, 00:14:37.395 "peer_address": { 00:14:37.395 "adrfam": "IPv4", 00:14:37.395 "traddr": "10.0.0.1", 00:14:37.395 "trsvcid": "35352", 00:14:37.395 "trtype": "TCP" 00:14:37.395 }, 00:14:37.395 "qid": 0, 00:14:37.395 "state": "enabled", 00:14:37.395 "thread": "nvmf_tgt_poll_group_000" 00:14:37.395 } 00:14:37.395 ]' 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:37.395 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.654 09:08:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.654 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.654 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.654 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.654 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.913 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:37.913 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.849 
09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:38.849 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.108 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.676 00:14:39.676 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.676 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.676 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.933 { 
00:14:39.933 "auth": { 00:14:39.933 "dhgroup": "ffdhe6144", 00:14:39.933 "digest": "sha512", 00:14:39.933 "state": "completed" 00:14:39.933 }, 00:14:39.933 "cntlid": 135, 00:14:39.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:39.933 "listen_address": { 00:14:39.933 "adrfam": "IPv4", 00:14:39.933 "traddr": "10.0.0.2", 00:14:39.933 "trsvcid": "4420", 00:14:39.933 "trtype": "TCP" 00:14:39.933 }, 00:14:39.933 "peer_address": { 00:14:39.933 "adrfam": "IPv4", 00:14:39.933 "traddr": "10.0.0.1", 00:14:39.933 "trsvcid": "35376", 00:14:39.933 "trtype": "TCP" 00:14:39.933 }, 00:14:39.933 "qid": 0, 00:14:39.933 "state": "enabled", 00:14:39.933 "thread": "nvmf_tgt_poll_group_000" 00:14:39.933 } 00:14:39.933 ]' 00:14:39.933 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.191 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.191 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.192 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:40.192 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.192 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.192 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.192 09:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.450 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:40.450 09:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.388 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:41.648 09:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.648 09:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.215 00:14:42.215 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.215 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.215 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.474 { 00:14:42.474 "auth": { 00:14:42.474 "dhgroup": "ffdhe8192", 00:14:42.474 "digest": "sha512", 00:14:42.474 "state": "completed" 00:14:42.474 }, 00:14:42.474 "cntlid": 137, 00:14:42.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:42.474 "listen_address": { 00:14:42.474 "adrfam": "IPv4", 00:14:42.474 "traddr": "10.0.0.2", 00:14:42.474 "trsvcid": "4420", 00:14:42.474 "trtype": "TCP" 00:14:42.474 }, 00:14:42.474 "peer_address": { 00:14:42.474 "adrfam": "IPv4", 00:14:42.474 "traddr": "10.0.0.1", 00:14:42.474 "trsvcid": "37262", 00:14:42.474 "trtype": "TCP" 00:14:42.474 }, 00:14:42.474 "qid": 0, 00:14:42.474 "state": "enabled", 
00:14:42.474 "thread": "nvmf_tgt_poll_group_000" 00:14:42.474 } 00:14:42.474 ]' 00:14:42.474 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.733 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.991 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:42.992 09:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:43.927 09:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:43.927 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.186 09:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.756 00:14:44.756 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.756 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.756 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:45.028 { 00:14:45.028 "auth": { 00:14:45.028 "dhgroup": "ffdhe8192", 00:14:45.028 "digest": "sha512", 00:14:45.028 "state": "completed" 00:14:45.028 }, 00:14:45.028 "cntlid": 139, 00:14:45.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:45.028 "listen_address": { 00:14:45.028 "adrfam": "IPv4", 00:14:45.028 "traddr": "10.0.0.2", 00:14:45.028 "trsvcid": "4420", 00:14:45.028 "trtype": "TCP" 00:14:45.028 }, 00:14:45.028 "peer_address": { 00:14:45.028 "adrfam": "IPv4", 00:14:45.028 "traddr": "10.0.0.1", 00:14:45.028 "trsvcid": "37272", 00:14:45.028 "trtype": "TCP" 00:14:45.028 }, 00:14:45.028 "qid": 0, 00:14:45.028 "state": "enabled", 00:14:45.028 "thread": "nvmf_tgt_poll_group_000" 00:14:45.028 } 00:14:45.028 ]' 00:14:45.028 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:45.301 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:45.301 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:45.301 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.301 09:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 
00:14:45.301 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.301 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.301 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.560 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:45.560 09:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: --dhchap-ctrl-secret DHHC-1:02:ZThiZTU3YzE4NDhlYWY1MTY0NDlmYjJlZTY5Y2EyZjI0ZWJkYTNmZjE0Y2VjZjgyHo5tgA==: 00:14:46.128 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.128 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:46.128 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.128 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.387 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.387 
09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.387 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:46.387 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.646 09:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:47.215 00:14:47.215 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.215 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.215 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.781 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.781 { 00:14:47.781 "auth": { 00:14:47.781 "dhgroup": "ffdhe8192", 00:14:47.781 "digest": "sha512", 00:14:47.781 "state": "completed" 00:14:47.781 
}, 00:14:47.781 "cntlid": 141, 00:14:47.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:47.781 "listen_address": { 00:14:47.781 "adrfam": "IPv4", 00:14:47.781 "traddr": "10.0.0.2", 00:14:47.782 "trsvcid": "4420", 00:14:47.782 "trtype": "TCP" 00:14:47.782 }, 00:14:47.782 "peer_address": { 00:14:47.782 "adrfam": "IPv4", 00:14:47.782 "traddr": "10.0.0.1", 00:14:47.782 "trsvcid": "37310", 00:14:47.782 "trtype": "TCP" 00:14:47.782 }, 00:14:47.782 "qid": 0, 00:14:47.782 "state": "enabled", 00:14:47.782 "thread": "nvmf_tgt_poll_group_000" 00:14:47.782 } 00:14:47.782 ]' 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.782 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.040 09:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:48.040 09:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:01:MTMxZjM4YWFlNGFkNDMwODVkYmI0NzNlYmJjODkyZDTEWNvl: 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:48.977 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.234 09:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.802 00:14:49.802 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.802 09:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.802 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.371 { 00:14:50.371 "auth": { 00:14:50.371 "dhgroup": "ffdhe8192", 00:14:50.371 "digest": "sha512", 00:14:50.371 "state": "completed" 00:14:50.371 }, 00:14:50.371 "cntlid": 143, 00:14:50.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:50.371 "listen_address": { 00:14:50.371 "adrfam": "IPv4", 00:14:50.371 "traddr": "10.0.0.2", 00:14:50.371 "trsvcid": "4420", 00:14:50.371 "trtype": "TCP" 00:14:50.371 }, 00:14:50.371 "peer_address": { 00:14:50.371 "adrfam": "IPv4", 00:14:50.371 "traddr": "10.0.0.1", 00:14:50.371 "trsvcid": "37342", 00:14:50.371 "trtype": "TCP" 00:14:50.371 }, 00:14:50.371 "qid": 0, 00:14:50.371 "state": "enabled", 00:14:50.371 "thread": "nvmf_tgt_poll_group_000" 00:14:50.371 } 00:14:50.371 ]' 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.371 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.630 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:50.630 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:51.567 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.825 09:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.825 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.394 00:14:52.394 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.394 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.394 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.653 
09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.653 { 00:14:52.653 "auth": { 00:14:52.653 "dhgroup": "ffdhe8192", 00:14:52.653 "digest": "sha512", 00:14:52.653 "state": "completed" 00:14:52.653 }, 00:14:52.653 "cntlid": 145, 00:14:52.653 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:52.653 "listen_address": { 00:14:52.653 "adrfam": "IPv4", 00:14:52.653 "traddr": "10.0.0.2", 00:14:52.653 "trsvcid": "4420", 00:14:52.653 "trtype": "TCP" 00:14:52.653 }, 00:14:52.653 "peer_address": { 00:14:52.653 "adrfam": "IPv4", 00:14:52.653 "traddr": "10.0.0.1", 00:14:52.653 "trsvcid": "48682", 00:14:52.653 "trtype": "TCP" 00:14:52.653 }, 00:14:52.653 "qid": 0, 00:14:52.653 "state": "enabled", 00:14:52.653 "thread": "nvmf_tgt_poll_group_000" 00:14:52.653 } 00:14:52.653 ]' 00:14:52.653 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.912 09:08:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.912 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.172 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:53.172 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:00:NzhkZDMwYzQ4ZTE2NWJmNTEwNDI0NWVlNzVkN2QyYzJiZmZmMmQ3MGE4ZGM5YWE2YfM1GA==: --dhchap-ctrl-secret DHHC-1:03:NjMwNjc2ZmFiZDBjNTJlMDEyN2Y2YmRjNzA5ZGI1YmEzODFmY2IzMmU5MmY1NTgxNWVhMGFjNTVkMjM0MDIxNUkfmRI=: 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
00:14:54.108 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:54.709 2024/11/20 09:08:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:54.709 request: 00:14:54.709 { 00:14:54.709 "method": "bdev_nvme_attach_controller", 00:14:54.709 "params": { 00:14:54.709 "name": "nvme0", 00:14:54.709 "trtype": "tcp", 00:14:54.709 "traddr": "10.0.0.2", 00:14:54.709 "adrfam": "ipv4", 00:14:54.709 "trsvcid": "4420", 00:14:54.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:54.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:54.709 "prchk_reftag": false, 00:14:54.709 "prchk_guard": false, 00:14:54.709 "hdgst": false, 00:14:54.709 "ddgst": false, 00:14:54.709 "dhchap_key": "key2", 00:14:54.709 "allow_unrecognized_csi": false 00:14:54.709 } 00:14:54.709 } 00:14:54.709 Got JSON-RPC error response 00:14:54.709 GoRPCClient: error on JSON-RPC call 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:54.709 
09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:54.709 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:14:54.710 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:54.710 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:54.710 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:54.710 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:54.710 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:55.646 2024/11/20 09:08:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:55.646 request: 00:14:55.646 { 00:14:55.646 "method": "bdev_nvme_attach_controller", 00:14:55.646 "params": { 00:14:55.646 "name": "nvme0", 00:14:55.646 "trtype": "tcp", 00:14:55.646 "traddr": "10.0.0.2", 00:14:55.646 "adrfam": "ipv4", 00:14:55.646 "trsvcid": "4420", 00:14:55.646 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:14:55.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:55.646 "prchk_reftag": false, 00:14:55.646 "prchk_guard": false, 00:14:55.646 "hdgst": false, 00:14:55.646 "ddgst": false, 00:14:55.646 "dhchap_key": "key1", 00:14:55.646 "dhchap_ctrlr_key": "ckey2", 00:14:55.646 "allow_unrecognized_csi": false 00:14:55.646 } 00:14:55.646 } 00:14:55.646 Got JSON-RPC error response 00:14:55.646 GoRPCClient: error on JSON-RPC call 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.646 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.214 2024/11/20 09:08:34 error on 
JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:14:56.214 request: 00:14:56.214 { 00:14:56.214 "method": "bdev_nvme_attach_controller", 00:14:56.214 "params": { 00:14:56.214 "name": "nvme0", 00:14:56.214 "trtype": "tcp", 00:14:56.214 "traddr": "10.0.0.2", 00:14:56.214 "adrfam": "ipv4", 00:14:56.214 "trsvcid": "4420", 00:14:56.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:56.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:56.214 "prchk_reftag": false, 00:14:56.214 "prchk_guard": false, 00:14:56.214 "hdgst": false, 00:14:56.214 "ddgst": false, 00:14:56.214 "dhchap_key": "key1", 00:14:56.214 "dhchap_ctrlr_key": "ckey1", 00:14:56.214 "allow_unrecognized_csi": false 00:14:56.214 } 00:14:56.214 } 00:14:56.214 Got JSON-RPC error response 00:14:56.214 GoRPCClient: error on JSON-RPC call 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 
00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76766 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76766 ']' 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76766 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76766 00:14:56.214 killing process with pid 76766 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76766' 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76766 00:14:56.214 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76766 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 
00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=81766 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 81766 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81766 ']' 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:56.474 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81766 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81766 ']' 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.851 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 null0 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.CFd 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bBQ ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bBQ 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.nLW 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.4gA ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4gA 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.9rR 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.aTk ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.aTk 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.AnQ 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.111 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.488 nvme0n1 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.488 09:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.488 { 00:14:59.488 "auth": { 00:14:59.488 "dhgroup": "ffdhe8192", 00:14:59.488 "digest": "sha512", 00:14:59.488 "state": "completed" 00:14:59.488 }, 00:14:59.488 "cntlid": 1, 00:14:59.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:14:59.488 "listen_address": { 00:14:59.488 "adrfam": "IPv4", 00:14:59.488 "traddr": "10.0.0.2", 00:14:59.488 "trsvcid": "4420", 00:14:59.488 "trtype": "TCP" 00:14:59.488 }, 00:14:59.488 "peer_address": { 00:14:59.488 "adrfam": "IPv4", 00:14:59.488 "traddr": "10.0.0.1", 00:14:59.488 "trsvcid": "48732", 00:14:59.488 "trtype": "TCP" 00:14:59.488 }, 00:14:59.488 "qid": 0, 00:14:59.488 "state": "enabled", 00:14:59.488 "thread": "nvmf_tgt_poll_group_000" 00:14:59.488 } 00:14:59.488 ]' 00:14:59.488 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.747 09:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.747 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.006 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:15:00.006 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:15:00.573 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.831 09:08:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key3 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:00.831 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.090 
09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.090 09:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.349 2024/11/20 09:08:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:01.349 request: 00:15:01.349 { 00:15:01.349 "method": "bdev_nvme_attach_controller", 00:15:01.349 "params": { 00:15:01.349 "name": "nvme0", 00:15:01.349 "trtype": "tcp", 00:15:01.349 "traddr": "10.0.0.2", 00:15:01.349 "adrfam": "ipv4", 00:15:01.349 "trsvcid": "4420", 00:15:01.349 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:01.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:15:01.349 "prchk_reftag": false, 00:15:01.349 "prchk_guard": false, 00:15:01.349 "hdgst": false, 00:15:01.349 "ddgst": false, 00:15:01.349 "dhchap_key": "key3", 00:15:01.349 "allow_unrecognized_csi": false 00:15:01.349 } 00:15:01.349 } 00:15:01.349 Got JSON-RPC error response 00:15:01.349 GoRPCClient: error on JSON-RPC call 00:15:01.349 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@655 -- # es=1 00:15:01.349 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.349 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.349 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.349 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:01.350 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:01.350 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:01.350 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.917 
09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:01.917 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.177 2024/11/20 09:08:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:02.177 request: 00:15:02.177 { 00:15:02.177 "method": "bdev_nvme_attach_controller", 00:15:02.177 "params": { 00:15:02.177 "name": "nvme0", 00:15:02.177 "trtype": "tcp", 00:15:02.177 "traddr": "10.0.0.2", 00:15:02.177 "adrfam": "ipv4", 00:15:02.177 "trsvcid": "4420", 00:15:02.177 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:02.177 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:15:02.177 "prchk_reftag": false, 00:15:02.177 "prchk_guard": false, 00:15:02.177 "hdgst": false, 00:15:02.177 "ddgst": false, 00:15:02.177 "dhchap_key": "key3", 00:15:02.177 "allow_unrecognized_csi": false 00:15:02.177 } 00:15:02.177 } 00:15:02.177 Got JSON-RPC 
error response 00:15:02.177 GoRPCClient: error on JSON-RPC call 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.177 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.436 
09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:02.436 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:03.005 2024/11/20 09:08:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:03.005 request: 00:15:03.005 { 00:15:03.005 "method": "bdev_nvme_attach_controller", 00:15:03.005 "params": { 00:15:03.005 "name": "nvme0", 00:15:03.005 "trtype": "tcp", 00:15:03.005 "traddr": "10.0.0.2", 00:15:03.005 "adrfam": "ipv4", 00:15:03.005 "trsvcid": "4420", 00:15:03.005 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:03.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:15:03.005 "prchk_reftag": false, 00:15:03.005 "prchk_guard": false, 00:15:03.005 "hdgst": false, 00:15:03.005 "ddgst": false, 00:15:03.005 "dhchap_key": "key0", 00:15:03.005 "dhchap_ctrlr_key": "key1", 00:15:03.005 "allow_unrecognized_csi": false 00:15:03.005 } 00:15:03.005 } 00:15:03.005 Got JSON-RPC error response 00:15:03.005 GoRPCClient: error on JSON-RPC call 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:03.005 
09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.005 09:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:03.265 nvme0n1 00:15:03.265 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:03.265 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:03.265 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.832 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.832 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.832 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller 
nvme0 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:04.091 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:05.040 nvme0n1 00:15:05.041 09:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:05.041 09:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:05.041 09:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:05.299 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.866 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.866 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:15:05.866 09:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid 44ab6922-625f-4dd5-abd7-64d78c556468 -l 0 --dhchap-secret DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: --dhchap-ctrl-secret DHHC-1:03:ZjExYWRmYjFlMjFlOTg2OTJhMDY0NzAwY2UzNDdmYWNlMGUzYmVhZjEzOTE3ZmY4ZWMwMWJkMDg1YjAwZWFkZkowSRY=: 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@226 -- # nvme_get_ctrlr 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.432 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- 
# case "$(type -t "$arg")" in 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:06.691 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:07.258 2024/11/20 09:08:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:07.258 request: 00:15:07.258 { 00:15:07.258 "method": "bdev_nvme_attach_controller", 00:15:07.258 "params": { 00:15:07.258 "name": "nvme0", 00:15:07.258 "trtype": "tcp", 00:15:07.258 "traddr": "10.0.0.2", 00:15:07.258 "adrfam": "ipv4", 00:15:07.258 "trsvcid": "4420", 00:15:07.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:07.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468", 00:15:07.258 "prchk_reftag": false, 00:15:07.258 "prchk_guard": false, 00:15:07.258 "hdgst": false, 00:15:07.258 "ddgst": false, 00:15:07.258 "dhchap_key": "key1", 00:15:07.258 "allow_unrecognized_csi": false 00:15:07.258 } 
00:15:07.258 } 00:15:07.258 Got JSON-RPC error response 00:15:07.258 GoRPCClient: error on JSON-RPC call 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:07.258 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:08.635 nvme0n1 00:15:08.635 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:08.635 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.635 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:08.635 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.635 09:08:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.635 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.202 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:09.202 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.202 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.202 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.202 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:09.203 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:09.203 09:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:09.461 nvme0n1 00:15:09.461 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:09.461 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.461 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@235 -- # jq -r '.[].name' 00:15:09.736 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.736 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.736 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: '' 2s 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 
00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: ]] 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjhjNmIyMTAzMGM4YTcyMzBlYzkwYzZiYTU2N2FkZjEBLodr: 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:10.301 09:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.199 09:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: 2s 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: ]] 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTE1ZWQwNGQzY2VmYjI0ZGY5NGQ5ZjY3NDY3NTQ5NTBkNDcyYzg4NzcwNThmMjYzgL8I7w==: 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:12.199 09:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:14.101 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:14.101 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:14.101 09:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:14.101 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:14.101 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:14.101 09:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:14.101 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:14.101 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.361 09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:14.361 
09:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:15.296 nvme0n1 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:15.296 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.229 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:16.229 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.229 09:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.487 09:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:16.487 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:16.745 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:16.745 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.745 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.002 09:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:17.936 2024/11/20 09:08:56 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:17.936 request: 00:15:17.936 { 00:15:17.936 "method": "bdev_nvme_set_keys", 00:15:17.936 "params": { 00:15:17.936 "name": "nvme0", 00:15:17.936 "dhchap_key": "key1", 00:15:17.936 "dhchap_ctrlr_key": "key3" 00:15:17.936 } 00:15:17.936 } 00:15:17.936 Got JSON-RPC error response 00:15:17.936 GoRPCClient: error on JSON-RPC call 00:15:17.936 09:08:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:17.936 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.936 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.937 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.937 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:17.937 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.937 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:18.195 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:18.195 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:19.130 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:19.130 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:19.130 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.388 09:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:19.388 09:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:20.762 nvme0n1 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key0 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:20.762 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:21.327 2024/11/20 09:09:00 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:21.327 request: 00:15:21.327 { 00:15:21.327 "method": "bdev_nvme_set_keys", 00:15:21.327 "params": { 00:15:21.327 "name": "nvme0", 00:15:21.327 "dhchap_key": "key2", 00:15:21.327 "dhchap_ctrlr_key": "key0" 00:15:21.327 } 00:15:21.327 } 00:15:21.327 Got JSON-RPC error response 00:15:21.327 GoRPCClient: error on JSON-RPC call 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.327 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:21.585 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:21.585 09:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:22.525 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:22.525 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.525 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76791 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76791 ']' 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76791 
00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.783 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76791 00:15:23.042 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:23.042 killing process with pid 76791 00:15:23.042 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:23.042 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76791' 00:15:23.042 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76791 00:15:23.042 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76791 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:23.300 rmmod nvme_tcp 00:15:23.300 rmmod nvme_fabrics 00:15:23.300 rmmod nvme_keyring 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 81766 ']' 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 81766 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81766 ']' 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81766 00:15:23.300 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81766 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.559 killing process with pid 81766 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81766' 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81766 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81766 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@254 -- # local dev 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:23.559 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:15:23.819 09:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # continue 00:15:23.819 09:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.CFd /tmp/spdk.key-sha256.nLW /tmp/spdk.key-sha384.9rR /tmp/spdk.key-sha512.AnQ /tmp/spdk.key-sha512.bBQ /tmp/spdk.key-sha384.4gA /tmp/spdk.key-sha256.aTk '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:23.819 00:15:23.819 real 3m24.427s 00:15:23.819 user 8m18.276s 00:15:23.819 sys 0m25.767s 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.819 ************************************ 00:15:23.819 END TEST nvmf_auth_target 00:15:23.819 ************************************ 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:23.819 09:09:02 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:23.819 ************************************ 00:15:23.819 START TEST nvmf_bdevio_no_huge 00:15:23.819 ************************************ 00:15:23.819 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:24.078 * Looking for test storage... 00:15:24.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@338 -- # local 'op=<' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.079 09:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.079 --rc genhtml_branch_coverage=1 00:15:24.079 --rc genhtml_function_coverage=1 00:15:24.079 --rc genhtml_legend=1 00:15:24.079 --rc geninfo_all_blocks=1 00:15:24.079 --rc geninfo_unexecuted_blocks=1 00:15:24.079 00:15:24.079 ' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.079 --rc genhtml_branch_coverage=1 00:15:24.079 --rc genhtml_function_coverage=1 00:15:24.079 --rc genhtml_legend=1 00:15:24.079 --rc geninfo_all_blocks=1 00:15:24.079 --rc geninfo_unexecuted_blocks=1 00:15:24.079 00:15:24.079 ' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.079 --rc genhtml_branch_coverage=1 00:15:24.079 --rc genhtml_function_coverage=1 00:15:24.079 --rc genhtml_legend=1 00:15:24.079 --rc geninfo_all_blocks=1 00:15:24.079 --rc geninfo_unexecuted_blocks=1 00:15:24.079 00:15:24.079 ' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.079 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:24.079 --rc genhtml_branch_coverage=1 00:15:24.079 --rc genhtml_function_coverage=1 00:15:24.079 --rc genhtml_legend=1 00:15:24.079 --rc geninfo_all_blocks=1 00:15:24.079 --rc geninfo_unexecuted_blocks=1 00:15:24.079 00:15:24.079 ' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.079 09:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:24.079 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:24.080 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: 
: integer expression expected 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@271 -- # [[ virt == phy ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@280 -- # nvmf_veth_init 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@223 -- # create_target_ns 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:24.080 09:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # create_main_bridge 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@105 -- # delete_main_bridge 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:24.080 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # 
local -g _dev 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:24.339 09:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 
-- # set_up initiator0 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target0 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.339 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0 up 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target0_br 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target0 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:24.339 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf 
'%u.%u.%u.%u\n' 10 0 0 1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:15:24.340 10.0.0.1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 
00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:15:24.340 10.0.0.2 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator0 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec 
nvmf_ns_spdk ip link set target0 up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target0_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 
09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 
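The xtrace above walks through `setup_interface_pair` for pair 0: a veth pair per endpoint, the `target0` end moved into the `nvmf_ns_spdk` namespace, addresses assigned, the `_br` peers enslaved to `nvmf_br`, and an iptables ACCEPT for port 4420. A condensed dry-run sketch of that sequence (hypothetical helper `run` just echoes the commands, since the real ones need root; not SPDK's actual helper code):

```shell
# Dry-run sketch of one initiator/target pair setup, pair id 0.
# run() prints each command instead of executing it.
run() { echo "$@"; }

id=0
ns=nvmf_ns_spdk
bridge=nvmf_br

# veth pairs: the plain end is used for traffic, the *_br end joins the bridge
run ip link add "initiator$id" type veth peer name "initiator${id}_br"
run ip link add "target$id" type veth peer name "target${id}_br"
# the target end lives inside the test namespace
run ip link set "target$id" netns "$ns"
# addresses come from the 10.0.0.0/24 pool (two per pair)
run ip addr add 10.0.0.1/24 dev "initiator$id"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "target$id"
# bridge the *_br peers so initiator and target can reach each other
run ip link set "initiator${id}_br" master "$bridge"
run ip link set "target${id}_br" master "$bridge"
# allow NVMe/TCP traffic in
run iptables -I INPUT 1 -i "initiator$id" -p tcp --dport 4420 -j ACCEPT
```

Running the commands for real requires root and the `nvmf_br` bridge plus the `nvmf_ns_spdk` namespace to already exist, as the earlier `create_main_bridge`/`create_target_ns` steps in the log show.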
00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up initiator1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:15:24.340 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@151 -- # set_up target1 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1 up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # set_up target1_br 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:15:24.340 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns target1 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local 
dev=target1 ns=nvmf_ns_spdk 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772163 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:15:24.341 10.0.0.3 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:15:24.341 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772164 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:15:24.341 10.0.0.4 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up initiator1 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:15:24.341 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:15:24.341 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:15:24.600 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link 
set initiator1_br up 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@129 -- # set_up target1_br 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
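Both pairs are now configured, and the `(( _dev++, ip_pool += 2 ))` step above advances the allocator. A minimal sketch of that address-pool arithmetic, assuming the same variable names as `nvmf/setup.sh` (each pair consumes two consecutive addresses starting from 0x0a000001, i.e. 10.0.0.1):

```shell
# Address-allocation loop: pair 0 gets addresses .1/.2, pair 1 gets .3/.4.
no=2                        # total_initiator_target_pairs
ip_pool=$(( 0x0a000001 ))   # 167772161 == 10.0.0.1
_dev=0
while (( _dev < no )); do
  printf 'pair %d: initiator=%d target=%d\n' "$_dev" "$ip_pool" $(( ip_pool + 1 ))
  (( _dev++, ip_pool += 2 ))
done
# pair 0: initiator=167772161 target=167772162
# pair 1: initiator=167772163 target=167772164
```

This matches the `setup_interface_pair 0 veth 167772161 tcp` and `setup_interface_pair 1 veth 167772163 tcp` calls recorded in the trace.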
00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 2 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 
00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:24.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:24.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:15:24.628 00:15:24.628 --- 10.0.0.1 ping statistics --- 00:15:24.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.628 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:24.628 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:24.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:24.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:24.628 00:15:24.628 --- 10.0.0.2 ping statistics --- 00:15:24.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.629 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:24.629 
09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:15:24.629 PING 10.0.0.3 
(10.0.0.3) 56(84) bytes of data. 00:15:24.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:24.629 00:15:24.629 --- 10.0.0.3 ping statistics --- 00:15:24.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.629 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:15:24.629 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:15:24.629 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:24.629 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:15:24.629 00:15:24.629 --- 10.0.0.4 ping statistics --- 00:15:24.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:24.629 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # return 0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 
-- # NVMF_TARGET_INTERFACE=target0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator0 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo initiator1 00:15:24.629 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=initiator1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:15:24.630 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target0 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target0 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:24.630 09:09:03 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo target1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=target1 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=82665 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 82665 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82665 ']' 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.630 09:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:24.889 [2024-11-20 09:09:03.582793] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:15:24.889 [2024-11-20 09:09:03.583101] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:24.889 [2024-11-20 09:09:03.764381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.147 [2024-11-20 09:09:03.858081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.147 [2024-11-20 09:09:03.858615] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.147 [2024-11-20 09:09:03.859345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.147 [2024-11-20 09:09:03.859943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.147 [2024-11-20 09:09:03.860253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.147 [2024-11-20 09:09:03.861461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:25.147 [2024-11-20 09:09:03.861608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:25.147 [2024-11-20 09:09:03.861824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:25.147 [2024-11-20 09:09:03.862036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 [2024-11-20 
09:09:04.749316] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 Malloc0 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.082 09:09:04 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:26.082 [2024-11-20 09:09:04.801775] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:15:26.082 { 00:15:26.082 "params": { 00:15:26.082 "name": "Nvme$subsystem", 00:15:26.082 "trtype": "$TEST_TRANSPORT", 00:15:26.082 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:26.082 "adrfam": "ipv4", 00:15:26.082 "trsvcid": "$NVMF_PORT", 00:15:26.082 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:26.082 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:26.082 "hdgst": ${hdgst:-false}, 00:15:26.082 "ddgst": ${ddgst:-false} 00:15:26.082 }, 00:15:26.082 "method": "bdev_nvme_attach_controller" 00:15:26.082 } 00:15:26.082 EOF 00:15:26.082 )") 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 
00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:15:26.082 09:09:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:15:26.082 "params": { 00:15:26.082 "name": "Nvme1", 00:15:26.082 "trtype": "tcp", 00:15:26.082 "traddr": "10.0.0.2", 00:15:26.082 "adrfam": "ipv4", 00:15:26.082 "trsvcid": "4420", 00:15:26.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:26.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:26.082 "hdgst": false, 00:15:26.082 "ddgst": false 00:15:26.082 }, 00:15:26.082 "method": "bdev_nvme_attach_controller" 00:15:26.082 }' 00:15:26.082 [2024-11-20 09:09:04.870464] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:15:26.082 [2024-11-20 09:09:04.870554] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82719 ] 00:15:26.340 [2024-11-20 09:09:05.034150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.340 [2024-11-20 09:09:05.110446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.340 [2024-11-20 09:09:05.110539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.340 [2024-11-20 09:09:05.110544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.598 I/O targets: 00:15:26.598 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:26.598 00:15:26.598 00:15:26.598 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.598 http://cunit.sourceforge.net/ 00:15:26.598 00:15:26.598 00:15:26.598 Suite: bdevio tests on: Nvme1n1 00:15:26.598 Test: blockdev write read block ...passed 00:15:26.598 Test: blockdev write zeroes read block ...passed 00:15:26.598 Test: blockdev write zeroes read no split ...passed 00:15:26.598 Test: blockdev write zeroes 
read split ...passed 00:15:26.598 Test: blockdev write zeroes read split partial ...passed 00:15:26.598 Test: blockdev reset ...[2024-11-20 09:09:05.470300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:26.598 [2024-11-20 09:09:05.470591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93380 (9): Bad file descriptor 00:15:26.598 [2024-11-20 09:09:05.484527] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:15:26.598 passed 00:15:26.598 Test: blockdev write read 8 blocks ...passed 00:15:26.598 Test: blockdev write read size > 128k ...passed 00:15:26.598 Test: blockdev write read invalid size ...passed 00:15:26.857 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:26.857 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:26.857 Test: blockdev write read max offset ...passed 00:15:26.857 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:26.857 Test: blockdev writev readv 8 blocks ...passed 00:15:26.857 Test: blockdev writev readv 30 x 1block ...passed 00:15:26.857 Test: blockdev writev readv block ...passed 00:15:26.857 Test: blockdev writev readv size > 128k ...passed 00:15:26.857 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:26.857 Test: blockdev comparev and writev ...[2024-11-20 09:09:05.658692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.658898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.658927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 
09:09:05.658942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.659329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.659352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.659370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.659381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.659676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.659692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.659708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.659718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.660092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.660109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.660125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:15:26.857 [2024-11-20 09:09:05.660136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:26.857 passed 00:15:26.857 Test: blockdev nvme passthru rw ...passed 00:15:26.857 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:09:05.742414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.857 [2024-11-20 09:09:05.742450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.742604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.857 [2024-11-20 09:09:05.742622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:26.857 [2024-11-20 09:09:05.742794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.857 [2024-11-20 09:09:05.742811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:26.857 passed 00:15:26.857 Test: blockdev nvme admin passthru ...[2024-11-20 09:09:05.742985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:26.857 [2024-11-20 09:09:05.743020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:26.857 passed 00:15:27.153 Test: blockdev copy ...passed 00:15:27.153 00:15:27.153 Run Summary: Type Total Ran Passed Failed Inactive 00:15:27.153 suites 1 1 n/a 0 0 00:15:27.153 tests 23 23 23 0 0 00:15:27.153 asserts 152 152 152 0 n/a 00:15:27.153 00:15:27.153 Elapsed time = 0.927 seconds 
00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:27.416 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:27.416 rmmod nvme_tcp 00:15:27.416 rmmod nvme_fabrics 00:15:27.674 rmmod nvme_keyring 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 82665 ']' 00:15:27.674 09:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 82665 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82665 ']' 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82665 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82665 00:15:27.674 killing process with pid 82665 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82665' 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82665 00:15:27.674 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82665 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:27.931 09:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:27.931 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@117 -- # ip link delete initiator0 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:15:28.189 09:09:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # continue 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:15:28.189 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:15:28.189 00:15:28.189 real 0m4.314s 00:15:28.189 user 0m14.255s 00:15:28.189 sys 0m1.776s 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.189 ************************************ 00:15:28.189 END TEST nvmf_bdevio_no_huge 00:15:28.189 ************************************ 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.189 ************************************ 00:15:28.189 START TEST nvmf_tls 00:15:28.189 ************************************ 00:15:28.189 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:28.449 * Looking for test storage... 
00:15:28.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.449 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.449 --rc genhtml_branch_coverage=1 00:15:28.449 --rc genhtml_function_coverage=1 00:15:28.449 --rc genhtml_legend=1 00:15:28.449 --rc geninfo_all_blocks=1 00:15:28.449 --rc geninfo_unexecuted_blocks=1 
00:15:28.449 00:15:28.449 ' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.449 --rc genhtml_branch_coverage=1 00:15:28.449 --rc genhtml_function_coverage=1 00:15:28.449 --rc genhtml_legend=1 00:15:28.449 --rc geninfo_all_blocks=1 00:15:28.449 --rc geninfo_unexecuted_blocks=1 00:15:28.449 00:15:28.449 ' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.449 --rc genhtml_branch_coverage=1 00:15:28.449 --rc genhtml_function_coverage=1 00:15:28.449 --rc genhtml_legend=1 00:15:28.449 --rc geninfo_all_blocks=1 00:15:28.449 --rc geninfo_unexecuted_blocks=1 00:15:28.449 00:15:28.449 ' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.449 --rc genhtml_branch_coverage=1 00:15:28.449 --rc genhtml_function_coverage=1 00:15:28.449 --rc genhtml_legend=1 00:15:28.449 --rc geninfo_all_blocks=1 00:15:28.449 --rc geninfo_unexecuted_blocks=1 00:15:28.449 00:15:28.449 ' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
paths/export.sh@5 -- # export PATH 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.449 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:28.450 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: 
[: : integer expression expected 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:15:28.450 
09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@280 -- # nvmf_veth_init 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@223 -- # create_target_ns 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # create_main_bridge 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@105 -- # delete_main_bridge 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:15:28.450 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:28.710 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0 up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target0 00:15:28.711 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:28.711 10.0.0.1 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:28.711 10.0.0.2 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip 
link set target0_br master nvmf_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target0_br 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:15:28.711 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:15:28.712 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up initiator1 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:15:28.712 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@151 -- # set_up target1 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1 up 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # set_up target1_br 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns target1 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:15:28.712 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772163 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:15:28.712 10.0.0.3 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:15:28.712 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772164 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:15:28.712 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:15:28.712 10.0.0.4 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up initiator1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 
-- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@129 -- # set_up target1_br 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 
00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 2 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:15:28.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:15:28.972 00:15:28.972 --- 10.0.0.1 ping statistics --- 00:15:28.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.972 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.972 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:28.973 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:28.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:28.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:15:28.973 00:15:28.973 --- 10.0.0.2 ping statistics --- 00:15:28.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.973 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:28.973 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:15:28.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:15:28.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:15:28.973 00:15:28.973 --- 10.0.0.3 ping statistics --- 00:15:28.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.973 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:15:28.973 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:15:28.973 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:28.973 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:15:28.973 00:15:28.973 --- 10.0.0.4 ping statistics --- 00:15:28.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.973 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # return 0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address 
initiator0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator0 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:28.973 09:09:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:28.973 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo initiator1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=initiator1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:28.974 
09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target0 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target0 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo target1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=target1 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=82962 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 82962 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82962 ']' 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.974 09:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.233 [2024-11-20 09:09:07.949993] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:15:29.233 [2024-11-20 09:09:07.950090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.233 [2024-11-20 09:09:08.109320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.490 [2024-11-20 09:09:08.181438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.490 [2024-11-20 09:09:08.181495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.490 [2024-11-20 09:09:08.181510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.490 [2024-11-20 09:09:08.181520] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.490 [2024-11-20 09:09:08.181529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:29.490 [2024-11-20 09:09:08.182138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.424 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:30.681 true 00:15:30.681 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:15:30.681 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:30.939 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:15:30.939 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:15:30.939 09:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:31.506 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:31.506 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:15:31.762 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:15:31.762 09:09:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:15:31.762 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:32.020 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:32.020 09:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:15:32.362 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:15:32.362 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:15:32.362 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:32.362 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:15:32.625 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:15:32.625 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:15:32.625 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:33.189 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:33.189 09:09:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:15:33.445 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:15:33.445 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:15:33.445 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:33.702 09:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:15:33.702 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # 
key=ffeeddccbbaa99887766554433221100 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:15:33.959 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.Iqli9VGbfh 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.Zom9XS1URw 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.Iqli9VGbfh 00:15:33.960 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.Zom9XS1URw 00:15:34.216 09:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:34.473 09:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:35.039 09:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.Iqli9VGbfh 00:15:35.039 09:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Iqli9VGbfh 00:15:35.039 09:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:35.039 [2024-11-20 09:09:13.913635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.039 09:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:35.298 09:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:35.556 [2024-11-20 09:09:14.449989] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.556 [2024-11-20 09:09:14.450234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.814 09:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:36.072 malloc0 00:15:36.072 09:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:36.330 09:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Iqli9VGbfh 00:15:36.610 09:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.869 09:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Iqli9VGbfh 00:15:49.073 
Initializing NVMe Controllers 00:15:49.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:49.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:49.073 Initialization complete. Launching workers. 00:15:49.073 ======================================================== 00:15:49.073 Latency(us) 00:15:49.073 Device Information : IOPS MiB/s Average min max 00:15:49.073 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9262.05 36.18 6911.69 1134.38 7662.17 00:15:49.073 ======================================================== 00:15:49.073 Total : 9262.05 36.18 6911.69 1134.38 7662.17 00:15:49.073 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iqli9VGbfh 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iqli9VGbfh 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83346 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83346 
/var/tmp/bdevperf.sock 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83346 ']' 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.073 09:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:49.073 [2024-11-20 09:09:25.978324] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:15:49.073 [2024-11-20 09:09:25.978591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83346 ] 00:15:49.073 [2024-11-20 09:09:26.130955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.073 [2024-11-20 09:09:26.184699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.073 09:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.073 09:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:49.073 09:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iqli9VGbfh 00:15:49.073 09:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:49.073 [2024-11-20 09:09:26.912465] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:49.073 TLSTESTn1 00:15:49.073 09:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:49.073 Running I/O for 10 seconds... 
00:15:50.266 3879.00 IOPS, 15.15 MiB/s [2024-11-20T09:09:30.119Z] 3945.00 IOPS, 15.41 MiB/s [2024-11-20T09:09:31.592Z] 3963.33 IOPS, 15.48 MiB/s [2024-11-20T09:09:32.183Z] 3974.00 IOPS, 15.52 MiB/s [2024-11-20T09:09:33.118Z] 3976.20 IOPS, 15.53 MiB/s [2024-11-20T09:09:34.494Z] 3987.83 IOPS, 15.58 MiB/s [2024-11-20T09:09:35.429Z] 4053.57 IOPS, 15.83 MiB/s [2024-11-20T09:09:36.364Z] 4115.50 IOPS, 16.08 MiB/s [2024-11-20T09:09:37.299Z] 4167.89 IOPS, 16.28 MiB/s [2024-11-20T09:09:37.299Z] 4196.00 IOPS, 16.39 MiB/s 00:15:58.380 Latency(us) 00:15:58.380 [2024-11-20T09:09:37.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.380 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:58.380 Verification LBA range: start 0x0 length 0x2000 00:15:58.380 TLSTESTn1 : 10.02 4201.18 16.41 0.00 0.00 30409.35 6464.23 24069.59 00:15:58.380 [2024-11-20T09:09:37.299Z] =================================================================================================================== 00:15:58.380 [2024-11-20T09:09:37.299Z] Total : 4201.18 16.41 0.00 0.00 30409.35 6464.23 24069.59 00:15:58.380 { 00:15:58.380 "results": [ 00:15:58.380 { 00:15:58.380 "job": "TLSTESTn1", 00:15:58.380 "core_mask": "0x4", 00:15:58.380 "workload": "verify", 00:15:58.380 "status": "finished", 00:15:58.380 "verify_range": { 00:15:58.380 "start": 0, 00:15:58.380 "length": 8192 00:15:58.380 }, 00:15:58.380 "queue_depth": 128, 00:15:58.380 "io_size": 4096, 00:15:58.380 "runtime": 10.017894, 00:15:58.380 "iops": 4201.182404205914, 00:15:58.380 "mibps": 16.410868766429353, 00:15:58.380 "io_failed": 0, 00:15:58.380 "io_timeout": 0, 00:15:58.380 "avg_latency_us": 30409.354041606457, 00:15:58.380 "min_latency_us": 6464.232727272727, 00:15:58.380 "max_latency_us": 24069.585454545453 00:15:58.380 } 00:15:58.380 ], 00:15:58.380 "core_count": 1 00:15:58.380 } 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83346 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83346 ']' 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83346 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83346 00:15:58.380 killing process with pid 83346 00:15:58.380 Received shutdown signal, test time was about 10.000000 seconds 00:15:58.380 00:15:58.380 Latency(us) 00:15:58.380 [2024-11-20T09:09:37.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.380 [2024-11-20T09:09:37.299Z] =================================================================================================================== 00:15:58.380 [2024-11-20T09:09:37.299Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83346' 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83346 00:15:58.380 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83346 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zom9XS1URw 00:15:58.644 09:09:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zom9XS1URw 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Zom9XS1URw 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Zom9XS1URw 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:58.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83492 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83492 /var/tmp/bdevperf.sock 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83492 ']' 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.644 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:58.644 [2024-11-20 09:09:37.436946] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:15:58.644 [2024-11-20 09:09:37.437083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:15:58.903 [2024-11-20 09:09:37.584957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.903 [2024-11-20 09:09:37.633495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.903 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.903 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:58.903 09:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Zom9XS1URw 00:15:59.161 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:59.422 [2024-11-20 09:09:38.297180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:59.422 [2024-11-20 09:09:38.304045] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:15:59.422 [2024-11-20 09:09:38.305036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddfac0 (107): Transport endpoint is not connected 00:15:59.422 [2024-11-20 09:09:38.306027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddfac0 (9): Bad file descriptor 00:15:59.422 [2024-11-20 09:09:38.307023] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:15:59.422 [2024-11-20 09:09:38.307047] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:15:59.422 [2024-11-20 09:09:38.307057] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:15:59.422 [2024-11-20 09:09:38.307073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:15:59.422 2024/11/20 09:09:38 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:59.422 request: 00:15:59.422 { 00:15:59.422 "method": "bdev_nvme_attach_controller", 00:15:59.422 "params": { 00:15:59.422 "name": "TLSTEST", 00:15:59.422 "trtype": "tcp", 00:15:59.422 "traddr": "10.0.0.2", 00:15:59.422 "adrfam": "ipv4", 00:15:59.422 "trsvcid": "4420", 00:15:59.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:59.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:59.422 "prchk_reftag": false, 00:15:59.422 "prchk_guard": false, 00:15:59.422 "hdgst": false, 00:15:59.422 "ddgst": false, 00:15:59.422 "psk": "key0", 00:15:59.422 "allow_unrecognized_csi": false 00:15:59.422 } 00:15:59.422 } 00:15:59.422 Got JSON-RPC error response 00:15:59.422 GoRPCClient: error on JSON-RPC call 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83492 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83492 
']' 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83492 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.422 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83492 00:15:59.681 killing process with pid 83492 00:15:59.681 Received shutdown signal, test time was about 10.000000 seconds 00:15:59.681 00:15:59.681 Latency(us) 00:15:59.681 [2024-11-20T09:09:38.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.681 [2024-11-20T09:09:38.600Z] =================================================================================================================== 00:15:59.681 [2024-11-20T09:09:38.600Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83492' 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83492 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83492 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iqli9VGbfh 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iqli9VGbfh 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:59.681 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:59.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Iqli9VGbfh 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iqli9VGbfh 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83532 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83532 /var/tmp/bdevperf.sock 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83532 ']' 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.682 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:59.940 [2024-11-20 09:09:38.607825] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:15:59.940 [2024-11-20 09:09:38.607916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83532 ] 00:15:59.940 [2024-11-20 09:09:38.750089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.940 [2024-11-20 09:09:38.793385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.197 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.197 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:00.197 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iqli9VGbfh 00:16:00.454 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:00.712 [2024-11-20 09:09:39.505662] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:00.712 [2024-11-20 09:09:39.514104] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:00.712 [2024-11-20 09:09:39.514163] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to 
find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:00.712 [2024-11-20 09:09:39.514227] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:00.712 [2024-11-20 09:09:39.514588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x631ac0 (107): Transport endpoint is not connected 00:16:00.712 [2024-11-20 09:09:39.515575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x631ac0 (9): Bad file descriptor 00:16:00.712 [2024-11-20 09:09:39.516572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:00.712 [2024-11-20 09:09:39.516609] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:00.712 [2024-11-20 09:09:39.516635] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:00.712 [2024-11-20 09:09:39.516651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:16:00.712 2024/11/20 09:09:39 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:00.712 request: 00:16:00.712 { 00:16:00.712 "method": "bdev_nvme_attach_controller", 00:16:00.712 "params": { 00:16:00.712 "name": "TLSTEST", 00:16:00.712 "trtype": "tcp", 00:16:00.712 "traddr": "10.0.0.2", 00:16:00.712 "adrfam": "ipv4", 00:16:00.712 "trsvcid": "4420", 00:16:00.712 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.712 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:00.712 "prchk_reftag": false, 00:16:00.712 "prchk_guard": false, 00:16:00.712 "hdgst": false, 00:16:00.712 "ddgst": false, 00:16:00.712 "psk": "key0", 00:16:00.712 "allow_unrecognized_csi": false 00:16:00.712 } 00:16:00.712 } 00:16:00.712 Got JSON-RPC error response 00:16:00.712 GoRPCClient: error on JSON-RPC call 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83532 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83532 ']' 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83532 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83532 00:16:00.712 killing process with pid 83532 00:16:00.712 Received shutdown signal, test time was about 10.000000 seconds 00:16:00.712 
00:16:00.712 Latency(us) 00:16:00.712 [2024-11-20T09:09:39.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.712 [2024-11-20T09:09:39.631Z] =================================================================================================================== 00:16:00.712 [2024-11-20T09:09:39.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83532' 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83532 00:16:00.712 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83532 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iqli9VGbfh 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iqli9VGbfh 00:16:00.971 
09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Iqli9VGbfh 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Iqli9VGbfh 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83571 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83571 /var/tmp/bdevperf.sock 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83571 ']' 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.971 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:00.971 [2024-11-20 09:09:39.807021] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:00.971 [2024-11-20 09:09:39.807128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83571 ] 00:16:01.229 [2024-11-20 09:09:39.949422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.229 [2024-11-20 09:09:39.992862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.229 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.229 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:01.229 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Iqli9VGbfh 00:16:01.488 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:16:01.747 [2024-11-20 09:09:40.564363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:01.747 [2024-11-20 09:09:40.574236] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.747 [2024-11-20 09:09:40.574278] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:01.747 [2024-11-20 09:09:40.574328] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:01.747 [2024-11-20 09:09:40.575227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7ac0 (107): Transport endpoint is not connected 00:16:01.747 [2024-11-20 09:09:40.576216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad7ac0 (9): Bad file descriptor 00:16:01.747 [2024-11-20 09:09:40.577214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:01.747 [2024-11-20 09:09:40.577235] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:01.747 [2024-11-20 09:09:40.577261] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:01.747 [2024-11-20 09:09:40.577281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:16:01.747 2024/11/20 09:09:40 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:01.747 request: 00:16:01.747 { 00:16:01.747 "method": "bdev_nvme_attach_controller", 00:16:01.747 "params": { 00:16:01.747 "name": "TLSTEST", 00:16:01.747 "trtype": "tcp", 00:16:01.747 "traddr": "10.0.0.2", 00:16:01.747 "adrfam": "ipv4", 00:16:01.747 "trsvcid": "4420", 00:16:01.747 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:01.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:01.747 "prchk_reftag": false, 00:16:01.747 "prchk_guard": false, 00:16:01.747 "hdgst": false, 00:16:01.747 "ddgst": false, 00:16:01.747 "psk": "key0", 00:16:01.747 "allow_unrecognized_csi": false 00:16:01.747 } 00:16:01.747 } 00:16:01.747 Got JSON-RPC error response 00:16:01.747 GoRPCClient: error on JSON-RPC call 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83571 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83571 ']' 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83571 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83571 00:16:01.747 killing process with pid 83571 00:16:01.747 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.747 
00:16:01.747 Latency(us) 00:16:01.747 [2024-11-20T09:09:40.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.747 [2024-11-20T09:09:40.666Z] =================================================================================================================== 00:16:01.747 [2024-11-20T09:09:40.666Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83571' 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83571 00:16:01.747 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83571 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:02.006 09:09:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83611 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83611 /var/tmp/bdevperf.sock 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83611 ']' 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:02.006 
09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:02.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.006 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:02.006 [2024-11-20 09:09:40.874742] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:02.006 [2024-11-20 09:09:40.874871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83611 ] 00:16:02.264 [2024-11-20 09:09:41.015043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.264 [2024-11-20 09:09:41.058728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.228 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.228 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:03.228 09:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:03.228 [2024-11-20 09:09:42.127146] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:03.228 [2024-11-20 09:09:42.127228] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:03.228 2024/11/20 09:09:42 error on JSON-RPC call, method: keyring_file_add_key, 
params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:03.228 request: 00:16:03.228 { 00:16:03.228 "method": "keyring_file_add_key", 00:16:03.228 "params": { 00:16:03.228 "name": "key0", 00:16:03.228 "path": "" 00:16:03.228 } 00:16:03.228 } 00:16:03.228 Got JSON-RPC error response 00:16:03.228 GoRPCClient: error on JSON-RPC call 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:03.487 [2024-11-20 09:09:42.363332] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:03.487 [2024-11-20 09:09:42.363397] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:03.487 2024/11/20 09:09:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:03.487 request: 00:16:03.487 { 00:16:03.487 "method": "bdev_nvme_attach_controller", 00:16:03.487 "params": { 00:16:03.487 "name": "TLSTEST", 00:16:03.487 "trtype": "tcp", 00:16:03.487 "traddr": "10.0.0.2", 00:16:03.487 "adrfam": "ipv4", 00:16:03.487 "trsvcid": "4420", 00:16:03.487 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:03.487 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:03.487 "prchk_reftag": false, 00:16:03.487 "prchk_guard": false, 00:16:03.487 "hdgst": false, 00:16:03.487 "ddgst": false, 00:16:03.487 "psk": "key0", 
00:16:03.487 "allow_unrecognized_csi": false 00:16:03.487 } 00:16:03.487 } 00:16:03.487 Got JSON-RPC error response 00:16:03.487 GoRPCClient: error on JSON-RPC call 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83611 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83611 ']' 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83611 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.487 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83611 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:03.746 killing process with pid 83611 00:16:03.746 Received shutdown signal, test time was about 10.000000 seconds 00:16:03.746 00:16:03.746 Latency(us) 00:16:03.746 [2024-11-20T09:09:42.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:03.746 [2024-11-20T09:09:42.665Z] =================================================================================================================== 00:16:03.746 [2024-11-20T09:09:42.665Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83611' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83611 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83611 00:16:03.746 09:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 82962 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82962 ']' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82962 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82962 00:16:03.746 killing process with pid 82962 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82962' 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82962 00:16:03.746 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82962 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:04.006 09:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.UyYF3kYxcS 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.UyYF3kYxcS 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=83675 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 
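The `format_interchange_psk` / `format_key` step traced above wraps the raw configured hex key into the NVMe TLS PSK interchange format: the `NVMeTLSkey-1` prefix, a two-digit hash indicator, and a base64 payload carrying the key bytes plus a CRC32 trailer, terminated by a colon. A minimal Python sketch of that construction follows; it mirrors what the `python -` heredoc in `nvmf/common.sh` computes here, but the little-endian byte order of the CRC trailer is an assumption based on reading SPDK's implementation, not something the log itself confirms:

```python
import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Wrap a configured PSK string into the NVMe TLS interchange format:
    NVMeTLSkey-1:<digest>:base64(key bytes || CRC32 trailer):

    Sketch only; the "<I" (little-endian) CRC packing is an assumption.
    """
    payload = key.encode("ascii")
    # Append a CRC32 of the payload so the consumer can detect corruption.
    trailer = struct.pack("<I", zlib.crc32(payload))
    encoded = base64.b64encode(payload + trailer).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{encoded}:"

print(format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2))
```

With the log's inputs (`key=00112233445566778899aabbccddeeff0011223344556677`, `digest=2`) this produces a `NVMeTLSkey-1:02:MDAx…` string of the same shape as the `key_long` value captured at `target/tls.sh@155`.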
83675 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83675 ']' 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.006 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.265 [2024-11-20 09:09:42.948130] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:04.265 [2024-11-20 09:09:42.948245] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.265 [2024-11-20 09:09:43.090652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.265 [2024-11-20 09:09:43.138167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.265 [2024-11-20 09:09:43.138232] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:04.265 [2024-11-20 09:09:43.138243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.265 [2024-11-20 09:09:43.138252] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.265 [2024-11-20 09:09:43.138260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.265 [2024-11-20 09:09:43.138683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UyYF3kYxcS 00:16:04.524 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:04.782 [2024-11-20 09:09:43.513491] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.782 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:05.041 09:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:05.299 [2024-11-20 09:09:44.045602] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:05.299 [2024-11-20 09:09:44.045872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.299 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:05.558 malloc0 00:16:05.558 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:05.816 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:06.075 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UyYF3kYxcS 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UyYF3kYxcS 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83771 
00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83771 /var/tmp/bdevperf.sock 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83771 ']' 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.334 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.334 [2024-11-20 09:09:45.134234] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:06.334 [2024-11-20 09:09:45.134406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83771 ] 00:16:06.592 [2024-11-20 09:09:45.289335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.592 [2024-11-20 09:09:45.355406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:07.528 09:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.528 09:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:07.528 09:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:07.529 09:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:07.788 [2024-11-20 09:09:46.593232] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.788 TLSTESTn1 00:16:07.788 09:09:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:08.047 Running I/O for 10 seconds... 
00:16:09.923 4107.00 IOPS, 16.04 MiB/s [2024-11-20T09:09:50.219Z] 4160.00 IOPS, 16.25 MiB/s [2024-11-20T09:09:51.155Z] 4138.67 IOPS, 16.17 MiB/s [2024-11-20T09:09:52.090Z] 4158.50 IOPS, 16.24 MiB/s [2024-11-20T09:09:53.027Z] 4159.00 IOPS, 16.25 MiB/s [2024-11-20T09:09:53.963Z] 4191.17 IOPS, 16.37 MiB/s [2024-11-20T09:09:54.900Z] 4223.29 IOPS, 16.50 MiB/s [2024-11-20T09:09:55.836Z] 4240.88 IOPS, 16.57 MiB/s [2024-11-20T09:09:57.214Z] 4257.11 IOPS, 16.63 MiB/s [2024-11-20T09:09:57.214Z] 4268.80 IOPS, 16.68 MiB/s 00:16:18.295 Latency(us) 00:16:18.295 [2024-11-20T09:09:57.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.295 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:18.295 Verification LBA range: start 0x0 length 0x2000 00:16:18.295 TLSTESTn1 : 10.01 4274.99 16.70 0.00 0.00 29890.47 5302.46 22997.18 00:16:18.295 [2024-11-20T09:09:57.214Z] =================================================================================================================== 00:16:18.295 [2024-11-20T09:09:57.215Z] Total : 4274.99 16.70 0.00 0.00 29890.47 5302.46 22997.18 00:16:18.296 { 00:16:18.296 "results": [ 00:16:18.296 { 00:16:18.296 "job": "TLSTESTn1", 00:16:18.296 "core_mask": "0x4", 00:16:18.296 "workload": "verify", 00:16:18.296 "status": "finished", 00:16:18.296 "verify_range": { 00:16:18.296 "start": 0, 00:16:18.296 "length": 8192 00:16:18.296 }, 00:16:18.296 "queue_depth": 128, 00:16:18.296 "io_size": 4096, 00:16:18.296 "runtime": 10.014996, 00:16:18.296 "iops": 4274.9892261564555, 00:16:18.296 "mibps": 16.699176664673654, 00:16:18.296 "io_failed": 0, 00:16:18.296 "io_timeout": 0, 00:16:18.296 "avg_latency_us": 29890.474636588715, 00:16:18.296 "min_latency_us": 5302.458181818181, 00:16:18.296 "max_latency_us": 22997.17818181818 00:16:18.296 } 00:16:18.296 ], 00:16:18.296 "core_count": 1 00:16:18.296 } 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83771 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83771 ']' 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83771 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83771 00:16:18.296 killing process with pid 83771 00:16:18.296 Received shutdown signal, test time was about 10.000000 seconds 00:16:18.296 00:16:18.296 Latency(us) 00:16:18.296 [2024-11-20T09:09:57.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:18.296 [2024-11-20T09:09:57.215Z] =================================================================================================================== 00:16:18.296 [2024-11-20T09:09:57.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83771' 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83771 00:16:18.296 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83771 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.UyYF3kYxcS 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UyYF3kYxcS 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UyYF3kYxcS 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UyYF3kYxcS 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UyYF3kYxcS 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83936 00:16:18.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83936 /var/tmp/bdevperf.sock 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83936 ']' 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.296 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:18.296 [2024-11-20 09:09:57.120136] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:18.296 [2024-11-20 09:09:57.120584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83936 ] 00:16:18.555 [2024-11-20 09:09:57.268404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.555 [2024-11-20 09:09:57.316517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.555 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.555 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:18.555 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:18.814 [2024-11-20 09:09:57.691853] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UyYF3kYxcS': 0100666 00:16:18.814 [2024-11-20 09:09:57.691889] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:18.814 2024/11/20 09:09:57 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.UyYF3kYxcS], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:18.814 request: 00:16:18.814 { 00:16:18.814 "method": "keyring_file_add_key", 00:16:18.814 "params": { 00:16:18.814 "name": "key0", 00:16:18.814 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:18.814 } 00:16:18.814 } 00:16:18.814 Got JSON-RPC error response 00:16:18.814 GoRPCClient: error on JSON-RPC call 00:16:18.814 09:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:19.073 [2024-11-20 09:09:57.984000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:19.073 [2024-11-20 09:09:57.984068] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:19.073 2024/11/20 09:09:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:19.074 request: 00:16:19.074 { 00:16:19.074 "method": "bdev_nvme_attach_controller", 00:16:19.074 "params": { 00:16:19.074 "name": "TLSTEST", 00:16:19.074 "trtype": "tcp", 00:16:19.074 "traddr": "10.0.0.2", 00:16:19.074 "adrfam": "ipv4", 00:16:19.074 "trsvcid": "4420", 00:16:19.074 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:19.074 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:19.074 "prchk_reftag": false, 00:16:19.074 "prchk_guard": false, 00:16:19.074 "hdgst": false, 00:16:19.074 "ddgst": false, 00:16:19.074 "psk": "key0", 00:16:19.074 "allow_unrecognized_csi": false 00:16:19.074 } 00:16:19.074 } 00:16:19.074 Got JSON-RPC error response 00:16:19.074 GoRPCClient: error on JSON-RPC call 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83936 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83936 ']' 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83936 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:19.333 09:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83936 00:16:19.333 killing process with pid 83936 00:16:19.333 Received shutdown signal, test time was about 10.000000 seconds 00:16:19.333 00:16:19.333 Latency(us) 00:16:19.333 [2024-11-20T09:09:58.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.333 [2024-11-20T09:09:58.252Z] =================================================================================================================== 00:16:19.333 [2024-11-20T09:09:58.252Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83936' 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83936 00:16:19.333 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83936 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 83675 00:16:19.334 09:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83675 ']' 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83675 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83675 00:16:19.334 killing process with pid 83675 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83675' 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83675 00:16:19.334 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83675 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=83981 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 83981 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83981 ']' 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.592 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:19.592 [2024-11-20 09:09:58.507190] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:19.592 [2024-11-20 09:09:58.507284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.851 [2024-11-20 09:09:58.653754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.851 [2024-11-20 09:09:58.696513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.851 [2024-11-20 09:09:58.696576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:19.851 [2024-11-20 09:09:58.696586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.851 [2024-11-20 09:09:58.696593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.851 [2024-11-20 09:09:58.696600] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.851 [2024-11-20 09:09:58.696958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:20.110 09:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:20.110 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:20.111 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UyYF3kYxcS 00:16:20.111 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:20.369 [2024-11-20 09:09:59.081564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:20.369 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:20.628 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:20.887 [2024-11-20 09:09:59.629692] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:20.887 [2024-11-20 09:09:59.630005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.887 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:21.146 malloc0 00:16:21.146 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:21.404 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:21.663 [2024-11-20 09:10:00.437074] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.UyYF3kYxcS': 0100666 00:16:21.663 
[2024-11-20 09:10:00.437119] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:21.663 2024/11/20 09:10:00 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.UyYF3kYxcS], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:21.663 request: 00:16:21.663 { 00:16:21.663 "method": "keyring_file_add_key", 00:16:21.663 "params": { 00:16:21.663 "name": "key0", 00:16:21.663 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:21.663 } 00:16:21.663 } 00:16:21.663 Got JSON-RPC error response 00:16:21.663 GoRPCClient: error on JSON-RPC call 00:16:21.663 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:21.923 [2024-11-20 09:10:00.697215] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:21.923 [2024-11-20 09:10:00.697303] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:21.923 2024/11/20 09:10:00 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:21.923 request: 00:16:21.923 { 00:16:21.923 "method": "nvmf_subsystem_add_host", 00:16:21.923 "params": { 00:16:21.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:21.923 "host": "nqn.2016-06.io.spdk:host1", 00:16:21.923 "psk": "key0" 00:16:21.923 } 00:16:21.923 } 00:16:21.923 Got JSON-RPC error response 00:16:21.923 GoRPCClient: error on JSON-RPC call 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:21.923 09:10:00 
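The `keyring_file_add_key` failure above is deliberate: the key file was left at mode 0666, and SPDK's keyring rejects key files that group or others can access (the `keyring_file_check_path` error in the log). The idea behind that check can be sketched in a few lines; `key_file_mode_ok` is a hypothetical helper for illustration, not SPDK's actual code.

```python
import os
import stat
import tempfile

def key_file_mode_ok(path: str) -> bool:
    """Reject key files readable or writable by group/others,
    mirroring the keyring_file_check_path behaviour seen in the log
    (assumption: owner-only permissions are required)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# A 0666 key file (as in the failing run above) is rejected; 0600 passes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o666)
mode_0666_ok = key_file_mode_ok(key_path)
os.chmod(key_path, 0o600)
mode_0600_ok = key_file_mode_ok(key_path)
os.unlink(key_path)
```

This matches what the test does next: `chmod 0600` on the key file, after which the same RPC sequence succeeds.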
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 83981 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83981 ']' 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83981 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83981 00:16:21.923 killing process with pid 83981 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83981' 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83981 00:16:21.923 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83981 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.UyYF3kYxcS 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:22.182 09:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84091 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84091 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84091 ']' 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.182 09:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.182 [2024-11-20 09:10:01.021359] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:22.182 [2024-11-20 09:10:01.022056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.441 [2024-11-20 09:10:01.164995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.441 [2024-11-20 09:10:01.216936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:22.441 [2024-11-20 09:10:01.216978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.441 [2024-11-20 09:10:01.216988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.441 [2024-11-20 09:10:01.216996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.441 [2024-11-20 09:10:01.217002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:22.441 [2024-11-20 09:10:01.217382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.376 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UyYF3kYxcS 00:16:23.377 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:23.377 [2024-11-20 09:10:02.265236] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.377 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:23.946 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:23.946 [2024-11-20 09:10:02.853400] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:23.946 [2024-11-20 09:10:02.853624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.213 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:24.472 malloc0 00:16:24.472 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:24.731 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:24.990 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
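The successful second pass above boils down to an ordered RPC sequence. It is sketched here as data rather than live calls, since running it needs an SPDK target listening on `/var/tmp/spdk.sock`; the `rpc.py` path, NQNs, and key path are the values from this run.

```python
# Ordered rpc.py calls that tls.sh issues to build the TLS-enabled target,
# as recorded in the log above. RPC and KEY are this run's values.
RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
KEY = "/tmp/tmp.UyYF3kYxcS"

setup_steps = [
    f"{RPC} nvmf_create_transport -t tcp -o",
    f"{RPC} nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10",
    # -k marks the listener as TLS-secured ("experimental" per the notice above)
    f"{RPC} nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k",
    f"{RPC} bdev_malloc_create 32 4096 -b malloc0",
    f"{RPC} nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1",
    f"{RPC} keyring_file_add_key key0 {KEY}",
    f"{RPC} nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0",
]

def index_of(fragment: str) -> int:
    """Position of the first step containing the given substring."""
    return next(i for i, s in enumerate(setup_steps) if fragment in s)

# The key must be registered before any host references it as a PSK --
# the first pass in the log fails precisely because key0 was never added.
assert index_of("keyring_file_add_key") < index_of("nvmf_subsystem_add_host")
```

Ordering is the only subtle part: transport before subsystem, subsystem before listener and namespace, and the keyring key before `nvmf_subsystem_add_host --psk` can resolve it.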
00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=84200 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 84200 /var/tmp/bdevperf.sock 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84200 ']' 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.249 09:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.249 [2024-11-20 09:10:03.986367] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:25.249 [2024-11-20 09:10:03.986475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84200 ] 00:16:25.249 [2024-11-20 09:10:04.138290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.508 [2024-11-20 09:10:04.197855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.508 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.508 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.508 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:25.767 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:26.025 [2024-11-20 09:10:04.885530] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.285 TLSTESTn1 00:16:26.285 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:26.544 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:16:26.544 "subsystems": [ 00:16:26.544 { 00:16:26.544 "subsystem": "keyring", 00:16:26.544 "config": [ 00:16:26.544 { 00:16:26.544 "method": "keyring_file_add_key", 00:16:26.544 "params": { 00:16:26.544 "name": "key0", 00:16:26.544 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:26.544 } 00:16:26.544 } 00:16:26.544 ] 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 
"subsystem": "iobuf", 00:16:26.544 "config": [ 00:16:26.544 { 00:16:26.544 "method": "iobuf_set_options", 00:16:26.544 "params": { 00:16:26.544 "enable_numa": false, 00:16:26.544 "large_bufsize": 135168, 00:16:26.544 "large_pool_count": 1024, 00:16:26.544 "small_bufsize": 8192, 00:16:26.544 "small_pool_count": 8192 00:16:26.544 } 00:16:26.544 } 00:16:26.544 ] 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "subsystem": "sock", 00:16:26.544 "config": [ 00:16:26.544 { 00:16:26.544 "method": "sock_set_default_impl", 00:16:26.544 "params": { 00:16:26.544 "impl_name": "posix" 00:16:26.544 } 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "method": "sock_impl_set_options", 00:16:26.544 "params": { 00:16:26.544 "enable_ktls": false, 00:16:26.544 "enable_placement_id": 0, 00:16:26.544 "enable_quickack": false, 00:16:26.544 "enable_recv_pipe": true, 00:16:26.544 "enable_zerocopy_send_client": false, 00:16:26.544 "enable_zerocopy_send_server": true, 00:16:26.544 "impl_name": "ssl", 00:16:26.544 "recv_buf_size": 4096, 00:16:26.544 "send_buf_size": 4096, 00:16:26.544 "tls_version": 0, 00:16:26.544 "zerocopy_threshold": 0 00:16:26.544 } 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "method": "sock_impl_set_options", 00:16:26.544 "params": { 00:16:26.544 "enable_ktls": false, 00:16:26.544 "enable_placement_id": 0, 00:16:26.544 "enable_quickack": false, 00:16:26.544 "enable_recv_pipe": true, 00:16:26.544 "enable_zerocopy_send_client": false, 00:16:26.544 "enable_zerocopy_send_server": true, 00:16:26.544 "impl_name": "posix", 00:16:26.544 "recv_buf_size": 2097152, 00:16:26.544 "send_buf_size": 2097152, 00:16:26.544 "tls_version": 0, 00:16:26.544 "zerocopy_threshold": 0 00:16:26.544 } 00:16:26.544 } 00:16:26.544 ] 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "subsystem": "vmd", 00:16:26.544 "config": [] 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "subsystem": "accel", 00:16:26.544 "config": [ 00:16:26.544 { 00:16:26.544 "method": "accel_set_options", 00:16:26.544 "params": { 00:16:26.544 
"buf_count": 2048, 00:16:26.544 "large_cache_size": 16, 00:16:26.544 "sequence_count": 2048, 00:16:26.544 "small_cache_size": 128, 00:16:26.544 "task_count": 2048 00:16:26.544 } 00:16:26.544 } 00:16:26.544 ] 00:16:26.544 }, 00:16:26.544 { 00:16:26.544 "subsystem": "bdev", 00:16:26.544 "config": [ 00:16:26.544 { 00:16:26.545 "method": "bdev_set_options", 00:16:26.545 "params": { 00:16:26.545 "bdev_auto_examine": true, 00:16:26.545 "bdev_io_cache_size": 256, 00:16:26.545 "bdev_io_pool_size": 65535, 00:16:26.545 "iobuf_large_cache_size": 16, 00:16:26.545 "iobuf_small_cache_size": 128 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_raid_set_options", 00:16:26.545 "params": { 00:16:26.545 "process_max_bandwidth_mb_sec": 0, 00:16:26.545 "process_window_size_kb": 1024 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_iscsi_set_options", 00:16:26.545 "params": { 00:16:26.545 "timeout_sec": 30 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_nvme_set_options", 00:16:26.545 "params": { 00:16:26.545 "action_on_timeout": "none", 00:16:26.545 "allow_accel_sequence": false, 00:16:26.545 "arbitration_burst": 0, 00:16:26.545 "bdev_retry_count": 3, 00:16:26.545 "ctrlr_loss_timeout_sec": 0, 00:16:26.545 "delay_cmd_submit": true, 00:16:26.545 "dhchap_dhgroups": [ 00:16:26.545 "null", 00:16:26.545 "ffdhe2048", 00:16:26.545 "ffdhe3072", 00:16:26.545 "ffdhe4096", 00:16:26.545 "ffdhe6144", 00:16:26.545 "ffdhe8192" 00:16:26.545 ], 00:16:26.545 "dhchap_digests": [ 00:16:26.545 "sha256", 00:16:26.545 "sha384", 00:16:26.545 "sha512" 00:16:26.545 ], 00:16:26.545 "disable_auto_failback": false, 00:16:26.545 "fast_io_fail_timeout_sec": 0, 00:16:26.545 "generate_uuids": false, 00:16:26.545 "high_priority_weight": 0, 00:16:26.545 "io_path_stat": false, 00:16:26.545 "io_queue_requests": 0, 00:16:26.545 "keep_alive_timeout_ms": 10000, 00:16:26.545 "low_priority_weight": 0, 00:16:26.545 "medium_priority_weight": 0, 
00:16:26.545 "nvme_adminq_poll_period_us": 10000, 00:16:26.545 "nvme_error_stat": false, 00:16:26.545 "nvme_ioq_poll_period_us": 0, 00:16:26.545 "rdma_cm_event_timeout_ms": 0, 00:16:26.545 "rdma_max_cq_size": 0, 00:16:26.545 "rdma_srq_size": 0, 00:16:26.545 "reconnect_delay_sec": 0, 00:16:26.545 "timeout_admin_us": 0, 00:16:26.545 "timeout_us": 0, 00:16:26.545 "transport_ack_timeout": 0, 00:16:26.545 "transport_retry_count": 4, 00:16:26.545 "transport_tos": 0 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_nvme_set_hotplug", 00:16:26.545 "params": { 00:16:26.545 "enable": false, 00:16:26.545 "period_us": 100000 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_malloc_create", 00:16:26.545 "params": { 00:16:26.545 "block_size": 4096, 00:16:26.545 "dif_is_head_of_md": false, 00:16:26.545 "dif_pi_format": 0, 00:16:26.545 "dif_type": 0, 00:16:26.545 "md_size": 0, 00:16:26.545 "name": "malloc0", 00:16:26.545 "num_blocks": 8192, 00:16:26.545 "optimal_io_boundary": 0, 00:16:26.545 "physical_block_size": 4096, 00:16:26.545 "uuid": "6c5a6a58-0c30-4e65-8ac9-7eadacb31b59" 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "bdev_wait_for_examine" 00:16:26.545 } 00:16:26.545 ] 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "subsystem": "nbd", 00:16:26.545 "config": [] 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "subsystem": "scheduler", 00:16:26.545 "config": [ 00:16:26.545 { 00:16:26.545 "method": "framework_set_scheduler", 00:16:26.545 "params": { 00:16:26.545 "name": "static" 00:16:26.545 } 00:16:26.545 } 00:16:26.545 ] 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "subsystem": "nvmf", 00:16:26.545 "config": [ 00:16:26.545 { 00:16:26.545 "method": "nvmf_set_config", 00:16:26.545 "params": { 00:16:26.545 "admin_cmd_passthru": { 00:16:26.545 "identify_ctrlr": false 00:16:26.545 }, 00:16:26.545 "dhchap_dhgroups": [ 00:16:26.545 "null", 00:16:26.545 "ffdhe2048", 00:16:26.545 "ffdhe3072", 00:16:26.545 "ffdhe4096", 
00:16:26.545 "ffdhe6144", 00:16:26.545 "ffdhe8192" 00:16:26.545 ], 00:16:26.545 "dhchap_digests": [ 00:16:26.545 "sha256", 00:16:26.545 "sha384", 00:16:26.545 "sha512" 00:16:26.545 ], 00:16:26.545 "discovery_filter": "match_any" 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_set_max_subsystems", 00:16:26.545 "params": { 00:16:26.545 "max_subsystems": 1024 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_set_crdt", 00:16:26.545 "params": { 00:16:26.545 "crdt1": 0, 00:16:26.545 "crdt2": 0, 00:16:26.545 "crdt3": 0 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_create_transport", 00:16:26.545 "params": { 00:16:26.545 "abort_timeout_sec": 1, 00:16:26.545 "ack_timeout": 0, 00:16:26.545 "buf_cache_size": 4294967295, 00:16:26.545 "c2h_success": false, 00:16:26.545 "data_wr_pool_size": 0, 00:16:26.545 "dif_insert_or_strip": false, 00:16:26.545 "in_capsule_data_size": 4096, 00:16:26.545 "io_unit_size": 131072, 00:16:26.545 "max_aq_depth": 128, 00:16:26.545 "max_io_qpairs_per_ctrlr": 127, 00:16:26.545 "max_io_size": 131072, 00:16:26.545 "max_queue_depth": 128, 00:16:26.545 "num_shared_buffers": 511, 00:16:26.545 "sock_priority": 0, 00:16:26.545 "trtype": "TCP", 00:16:26.545 "zcopy": false 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_create_subsystem", 00:16:26.545 "params": { 00:16:26.545 "allow_any_host": false, 00:16:26.545 "ana_reporting": false, 00:16:26.545 "max_cntlid": 65519, 00:16:26.545 "max_namespaces": 10, 00:16:26.545 "min_cntlid": 1, 00:16:26.545 "model_number": "SPDK bdev Controller", 00:16:26.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.545 "serial_number": "SPDK00000000000001" 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_subsystem_add_host", 00:16:26.545 "params": { 00:16:26.545 "host": "nqn.2016-06.io.spdk:host1", 00:16:26.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.545 "psk": "key0" 00:16:26.545 } 
00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_subsystem_add_ns", 00:16:26.545 "params": { 00:16:26.545 "namespace": { 00:16:26.545 "bdev_name": "malloc0", 00:16:26.545 "nguid": "6C5A6A580C304E658AC97EADACB31B59", 00:16:26.545 "no_auto_visible": false, 00:16:26.545 "nsid": 1, 00:16:26.545 "uuid": "6c5a6a58-0c30-4e65-8ac9-7eadacb31b59" 00:16:26.545 }, 00:16:26.545 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:26.545 } 00:16:26.545 }, 00:16:26.545 { 00:16:26.545 "method": "nvmf_subsystem_add_listener", 00:16:26.545 "params": { 00:16:26.545 "listen_address": { 00:16:26.545 "adrfam": "IPv4", 00:16:26.545 "traddr": "10.0.0.2", 00:16:26.545 "trsvcid": "4420", 00:16:26.545 "trtype": "TCP" 00:16:26.545 }, 00:16:26.545 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:26.545 "secure_channel": true 00:16:26.545 } 00:16:26.545 } 00:16:26.545 ] 00:16:26.545 } 00:16:26.545 ] 00:16:26.545 }' 00:16:26.545 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:27.114 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:16:27.115 "subsystems": [ 00:16:27.115 { 00:16:27.115 "subsystem": "keyring", 00:16:27.115 "config": [ 00:16:27.115 { 00:16:27.115 "method": "keyring_file_add_key", 00:16:27.115 "params": { 00:16:27.115 "name": "key0", 00:16:27.115 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:27.115 } 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "iobuf", 00:16:27.115 "config": [ 00:16:27.115 { 00:16:27.115 "method": "iobuf_set_options", 00:16:27.115 "params": { 00:16:27.115 "enable_numa": false, 00:16:27.115 "large_bufsize": 135168, 00:16:27.115 "large_pool_count": 1024, 00:16:27.115 "small_bufsize": 8192, 00:16:27.115 "small_pool_count": 8192 00:16:27.115 } 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "sock", 00:16:27.115 "config": [ 00:16:27.115 { 00:16:27.115 
"method": "sock_set_default_impl", 00:16:27.115 "params": { 00:16:27.115 "impl_name": "posix" 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "sock_impl_set_options", 00:16:27.115 "params": { 00:16:27.115 "enable_ktls": false, 00:16:27.115 "enable_placement_id": 0, 00:16:27.115 "enable_quickack": false, 00:16:27.115 "enable_recv_pipe": true, 00:16:27.115 "enable_zerocopy_send_client": false, 00:16:27.115 "enable_zerocopy_send_server": true, 00:16:27.115 "impl_name": "ssl", 00:16:27.115 "recv_buf_size": 4096, 00:16:27.115 "send_buf_size": 4096, 00:16:27.115 "tls_version": 0, 00:16:27.115 "zerocopy_threshold": 0 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "sock_impl_set_options", 00:16:27.115 "params": { 00:16:27.115 "enable_ktls": false, 00:16:27.115 "enable_placement_id": 0, 00:16:27.115 "enable_quickack": false, 00:16:27.115 "enable_recv_pipe": true, 00:16:27.115 "enable_zerocopy_send_client": false, 00:16:27.115 "enable_zerocopy_send_server": true, 00:16:27.115 "impl_name": "posix", 00:16:27.115 "recv_buf_size": 2097152, 00:16:27.115 "send_buf_size": 2097152, 00:16:27.115 "tls_version": 0, 00:16:27.115 "zerocopy_threshold": 0 00:16:27.115 } 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "vmd", 00:16:27.115 "config": [] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "accel", 00:16:27.115 "config": [ 00:16:27.115 { 00:16:27.115 "method": "accel_set_options", 00:16:27.115 "params": { 00:16:27.115 "buf_count": 2048, 00:16:27.115 "large_cache_size": 16, 00:16:27.115 "sequence_count": 2048, 00:16:27.115 "small_cache_size": 128, 00:16:27.115 "task_count": 2048 00:16:27.115 } 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "bdev", 00:16:27.115 "config": [ 00:16:27.115 { 00:16:27.115 "method": "bdev_set_options", 00:16:27.115 "params": { 00:16:27.115 "bdev_auto_examine": true, 00:16:27.115 "bdev_io_cache_size": 256, 00:16:27.115 
"bdev_io_pool_size": 65535, 00:16:27.115 "iobuf_large_cache_size": 16, 00:16:27.115 "iobuf_small_cache_size": 128 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_raid_set_options", 00:16:27.115 "params": { 00:16:27.115 "process_max_bandwidth_mb_sec": 0, 00:16:27.115 "process_window_size_kb": 1024 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_iscsi_set_options", 00:16:27.115 "params": { 00:16:27.115 "timeout_sec": 30 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_nvme_set_options", 00:16:27.115 "params": { 00:16:27.115 "action_on_timeout": "none", 00:16:27.115 "allow_accel_sequence": false, 00:16:27.115 "arbitration_burst": 0, 00:16:27.115 "bdev_retry_count": 3, 00:16:27.115 "ctrlr_loss_timeout_sec": 0, 00:16:27.115 "delay_cmd_submit": true, 00:16:27.115 "dhchap_dhgroups": [ 00:16:27.115 "null", 00:16:27.115 "ffdhe2048", 00:16:27.115 "ffdhe3072", 00:16:27.115 "ffdhe4096", 00:16:27.115 "ffdhe6144", 00:16:27.115 "ffdhe8192" 00:16:27.115 ], 00:16:27.115 "dhchap_digests": [ 00:16:27.115 "sha256", 00:16:27.115 "sha384", 00:16:27.115 "sha512" 00:16:27.115 ], 00:16:27.115 "disable_auto_failback": false, 00:16:27.115 "fast_io_fail_timeout_sec": 0, 00:16:27.115 "generate_uuids": false, 00:16:27.115 "high_priority_weight": 0, 00:16:27.115 "io_path_stat": false, 00:16:27.115 "io_queue_requests": 512, 00:16:27.115 "keep_alive_timeout_ms": 10000, 00:16:27.115 "low_priority_weight": 0, 00:16:27.115 "medium_priority_weight": 0, 00:16:27.115 "nvme_adminq_poll_period_us": 10000, 00:16:27.115 "nvme_error_stat": false, 00:16:27.115 "nvme_ioq_poll_period_us": 0, 00:16:27.115 "rdma_cm_event_timeout_ms": 0, 00:16:27.115 "rdma_max_cq_size": 0, 00:16:27.115 "rdma_srq_size": 0, 00:16:27.115 "reconnect_delay_sec": 0, 00:16:27.115 "timeout_admin_us": 0, 00:16:27.115 "timeout_us": 0, 00:16:27.115 "transport_ack_timeout": 0, 00:16:27.115 "transport_retry_count": 4, 00:16:27.115 "transport_tos": 0 00:16:27.115 } 
00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_nvme_attach_controller", 00:16:27.115 "params": { 00:16:27.115 "adrfam": "IPv4", 00:16:27.115 "ctrlr_loss_timeout_sec": 0, 00:16:27.115 "ddgst": false, 00:16:27.115 "fast_io_fail_timeout_sec": 0, 00:16:27.115 "hdgst": false, 00:16:27.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.115 "multipath": "multipath", 00:16:27.115 "name": "TLSTEST", 00:16:27.115 "prchk_guard": false, 00:16:27.115 "prchk_reftag": false, 00:16:27.115 "psk": "key0", 00:16:27.115 "reconnect_delay_sec": 0, 00:16:27.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.115 "traddr": "10.0.0.2", 00:16:27.115 "trsvcid": "4420", 00:16:27.115 "trtype": "TCP" 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_nvme_set_hotplug", 00:16:27.115 "params": { 00:16:27.115 "enable": false, 00:16:27.115 "period_us": 100000 00:16:27.115 } 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "method": "bdev_wait_for_examine" 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }, 00:16:27.115 { 00:16:27.115 "subsystem": "nbd", 00:16:27.115 "config": [] 00:16:27.115 } 00:16:27.115 ] 00:16:27.115 }' 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 84200 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84200 ']' 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84200 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84200 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
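The two `save_config` dumps above are how the test verifies the wiring: the keyring entry on the target side and the `--psk key0` reference on both sides must agree. A small sketch of that consistency check, run against a trimmed copy of the target config from this log (only the keyring and nvmf fragments are kept):

```python
import json

# Trimmed version of the tgtconf save_config output above; values copied
# from this run (key path /tmp/tmp.UyYF3kYxcS, cnode1/host1 NQNs).
config = json.loads("""
{
  "subsystems": [
    {"subsystem": "keyring",
     "config": [{"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.UyYF3kYxcS"}}]},
    {"subsystem": "nvmf",
     "config": [{"method": "nvmf_subsystem_add_host",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "host": "nqn.2016-06.io.spdk:host1",
                            "psk": "key0"}}]}
  ]
}
""")

def find_calls(cfg: dict, method: str) -> list:
    """Collect the params of every config entry using the given RPC method."""
    return [c["params"] for s in cfg["subsystems"] for c in s["config"]
            if c["method"] == method]

keys = {p["name"]: p["path"] for p in find_calls(config, "keyring_file_add_key")}
hosts = find_calls(config, "nvmf_subsystem_add_host")
# Every host PSK must name a key registered in the keyring subsystem;
# the earlier "Key 'key0' does not exist" error is this invariant failing.
assert all(h["psk"] in keys for h in hosts)
```

The same walk works on the bdevperf config, where the PSK reference appears in `bdev_nvme_attach_controller` instead of `nvmf_subsystem_add_host`.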
common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:27.115 killing process with pid 84200 00:16:27.115 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84200' 00:16:27.115 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.115 00:16:27.115 Latency(us) 00:16:27.115 [2024-11-20T09:10:06.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.115 [2024-11-20T09:10:06.034Z] =================================================================================================================== 00:16:27.115 [2024-11-20T09:10:06.034Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84200 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84200 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 84091 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84091 ']' 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84091 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84091 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:27.116 killing process with pid 84091 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 84091' 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84091 00:16:27.116 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84091 00:16:27.376 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:27.376 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:27.376 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.376 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:16:27.376 "subsystems": [ 00:16:27.376 { 00:16:27.376 "subsystem": "keyring", 00:16:27.376 "config": [ 00:16:27.376 { 00:16:27.376 "method": "keyring_file_add_key", 00:16:27.376 "params": { 00:16:27.376 "name": "key0", 00:16:27.376 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:27.376 } 00:16:27.376 } 00:16:27.376 ] 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "subsystem": "iobuf", 00:16:27.376 "config": [ 00:16:27.376 { 00:16:27.376 "method": "iobuf_set_options", 00:16:27.376 "params": { 00:16:27.376 "enable_numa": false, 00:16:27.376 "large_bufsize": 135168, 00:16:27.376 "large_pool_count": 1024, 00:16:27.376 "small_bufsize": 8192, 00:16:27.376 "small_pool_count": 8192 00:16:27.376 } 00:16:27.376 } 00:16:27.376 ] 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "subsystem": "sock", 00:16:27.376 "config": [ 00:16:27.376 { 00:16:27.376 "method": "sock_set_default_impl", 00:16:27.376 "params": { 00:16:27.376 "impl_name": "posix" 00:16:27.376 } 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "method": "sock_impl_set_options", 00:16:27.376 "params": { 00:16:27.376 "enable_ktls": false, 00:16:27.376 "enable_placement_id": 0, 00:16:27.376 "enable_quickack": false, 00:16:27.376 "enable_recv_pipe": true, 00:16:27.376 "enable_zerocopy_send_client": false, 00:16:27.376 "enable_zerocopy_send_server": true, 00:16:27.376 
"impl_name": "ssl", 00:16:27.376 "recv_buf_size": 4096, 00:16:27.376 "send_buf_size": 4096, 00:16:27.376 "tls_version": 0, 00:16:27.376 "zerocopy_threshold": 0 00:16:27.376 } 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "method": "sock_impl_set_options", 00:16:27.376 "params": { 00:16:27.376 "enable_ktls": false, 00:16:27.376 "enable_placement_id": 0, 00:16:27.376 "enable_quickack": false, 00:16:27.376 "enable_recv_pipe": true, 00:16:27.376 "enable_zerocopy_send_client": false, 00:16:27.376 "enable_zerocopy_send_server": true, 00:16:27.376 "impl_name": "posix", 00:16:27.376 "recv_buf_size": 2097152, 00:16:27.376 "send_buf_size": 2097152, 00:16:27.376 "tls_version": 0, 00:16:27.376 "zerocopy_threshold": 0 00:16:27.376 } 00:16:27.376 } 00:16:27.376 ] 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "subsystem": "vmd", 00:16:27.376 "config": [] 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "subsystem": "accel", 00:16:27.376 "config": [ 00:16:27.376 { 00:16:27.376 "method": "accel_set_options", 00:16:27.376 "params": { 00:16:27.376 "buf_count": 2048, 00:16:27.376 "large_cache_size": 16, 00:16:27.376 "sequence_count": 2048, 00:16:27.376 "small_cache_size": 128, 00:16:27.376 "task_count": 2048 00:16:27.376 } 00:16:27.376 } 00:16:27.376 ] 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "subsystem": "bdev", 00:16:27.376 "config": [ 00:16:27.376 { 00:16:27.376 "method": "bdev_set_options", 00:16:27.376 "params": { 00:16:27.376 "bdev_auto_examine": true, 00:16:27.376 "bdev_io_cache_size": 256, 00:16:27.376 "bdev_io_pool_size": 65535, 00:16:27.376 "iobuf_large_cache_size": 16, 00:16:27.376 "iobuf_small_cache_size": 128 00:16:27.376 } 00:16:27.376 }, 00:16:27.376 { 00:16:27.376 "method": "bdev_raid_set_options", 00:16:27.376 "params": { 00:16:27.376 "process_max_bandwidth_mb_sec": 0, 00:16:27.377 "process_window_size_kb": 1024 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "bdev_iscsi_set_options", 00:16:27.377 "params": { 00:16:27.377 "timeout_sec": 30 00:16:27.377 
} 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "bdev_nvme_set_options", 00:16:27.377 "params": { 00:16:27.377 "action_on_timeout": "none", 00:16:27.377 "allow_accel_sequence": false, 00:16:27.377 "arbitration_burst": 0, 00:16:27.377 "bdev_retry_count": 3, 00:16:27.377 "ctrlr_loss_timeout_sec": 0, 00:16:27.377 "delay_cmd_submit": true, 00:16:27.377 "dhchap_dhgroups": [ 00:16:27.377 "null", 00:16:27.377 "ffdhe2048", 00:16:27.377 "ffdhe3072", 00:16:27.377 "ffdhe4096", 00:16:27.377 "ffdhe6144", 00:16:27.377 "ffdhe8192" 00:16:27.377 ], 00:16:27.377 "dhchap_digests": [ 00:16:27.377 "sha256", 00:16:27.377 "sha384", 00:16:27.377 "sha512" 00:16:27.377 ], 00:16:27.377 "disable_auto_failback": false, 00:16:27.377 "fast_io_fail_timeout_sec": 0, 00:16:27.377 "generate_uuids": false, 00:16:27.377 "high_priority_weight": 0, 00:16:27.377 "io_path_stat": false, 00:16:27.377 "io_queue_requests": 0, 00:16:27.377 "keep_alive_timeout_ms": 10000, 00:16:27.377 "low_priority_weight": 0, 00:16:27.377 "medium_priority_weight": 0, 00:16:27.377 "nvme_adminq_poll_period_us": 10000, 00:16:27.377 "nvme_error_stat": false, 00:16:27.377 "nvme_ioq_poll_period_us": 0, 00:16:27.377 "rdma_cm_event_timeout_ms": 0, 00:16:27.377 "rdma_max_cq_size": 0, 00:16:27.377 "rdma_srq_size": 0, 00:16:27.377 "reconnect_delay_sec": 0, 00:16:27.377 "timeout_admin_us": 0, 00:16:27.377 "timeout_us": 0, 00:16:27.377 "transport_ack_timeout": 0, 00:16:27.377 "transport_retry_count": 4, 00:16:27.377 "transport_tos": 0 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "bdev_nvme_set_hotplug", 00:16:27.377 "params": { 00:16:27.377 "enable": false, 00:16:27.377 "period_us": 100000 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "bdev_malloc_create", 00:16:27.377 "params": { 00:16:27.377 "block_size": 4096, 00:16:27.377 "dif_is_head_of_md": false, 00:16:27.377 "dif_pi_format": 0, 00:16:27.377 "dif_type": 0, 00:16:27.377 "md_size": 0, 00:16:27.377 "name": "malloc0", 00:16:27.377 
"num_blocks": 8192, 00:16:27.377 "optimal_io_boundary": 0, 00:16:27.377 "physical_block_size": 4096, 00:16:27.377 "uuid": "6c5a6a58-0c30-4e65-8ac9-7eadacb31b59" 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "bdev_wait_for_examine" 00:16:27.377 } 00:16:27.377 ] 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "subsystem": "nbd", 00:16:27.377 "config": [] 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "subsystem": "scheduler", 00:16:27.377 "config": [ 00:16:27.377 { 00:16:27.377 "method": "framework_set_scheduler", 00:16:27.377 "params": { 00:16:27.377 "name": "static" 00:16:27.377 } 00:16:27.377 } 00:16:27.377 ] 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "subsystem": "nvmf", 00:16:27.377 "config": [ 00:16:27.377 { 00:16:27.377 "method": "nvmf_set_config", 00:16:27.377 "params": { 00:16:27.377 "admin_cmd_passthru": { 00:16:27.377 "identify_ctrlr": false 00:16:27.377 }, 00:16:27.377 "dhchap_dhgroups": [ 00:16:27.377 "null", 00:16:27.377 "ffdhe2048", 00:16:27.377 "ffdhe3072", 00:16:27.377 "ffdhe4096", 00:16:27.377 "ffdhe6144", 00:16:27.377 "ffdhe8192" 00:16:27.377 ], 00:16:27.377 "dhchap_digests": [ 00:16:27.377 "sha256", 00:16:27.377 "sha384", 00:16:27.377 "sha512" 00:16:27.377 ], 00:16:27.377 "discovery_filter": "match_any" 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "nvmf_set_max_subsystems", 00:16:27.377 "params": { 00:16:27.377 "max_subsystems": 1024 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "nvmf_set_crdt", 00:16:27.377 "params": { 00:16:27.377 "crdt1": 0, 00:16:27.377 "crdt2": 0, 00:16:27.377 "crdt3": 0 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "nvmf_create_transport", 00:16:27.377 "params": { 00:16:27.377 "abort_timeout_sec": 1, 00:16:27.377 "ack_timeout": 0, 00:16:27.377 "buf_cache_size": 4294967295, 00:16:27.377 "c2h_success": false, 00:16:27.377 "data_wr_pool_size": 0, 00:16:27.377 "dif_insert_or_strip": false, 00:16:27.377 "in_capsule_data_size": 4096, 
00:16:27.377 "io_unit_size": 131072, 00:16:27.377 "max_aq_depth": 128, 00:16:27.377 "max_io_qpairs_per_ctrlr": 127, 00:16:27.377 "max_io_size": 131072, 00:16:27.377 "max_queue_depth": 128, 00:16:27.377 "num_shared_buffers": 511, 00:16:27.377 "sock_priority": 0, 00:16:27.377 "trtype": "TCP", 00:16:27.377 "zcopy": false 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "nvmf_create_subsystem", 00:16:27.377 "params": { 00:16:27.377 "allow_any_host": false, 00:16:27.377 "ana_reporting": false, 00:16:27.377 "max_cntlid": 65519, 00:16:27.377 "max_namespaces": 10, 00:16:27.377 "min_cntlid": 1, 00:16:27.377 "model_number": "SPDK bdev Controller", 00:16:27.377 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.377 "serial_number": "SPDK00000000000001" 00:16:27.377 } 00:16:27.377 }, 00:16:27.377 { 00:16:27.377 "method": "nvmf_subsystem_add_host", 00:16:27.378 "params": { 00:16:27.378 "host": "nqn.2016-06.io.spdk:host1", 00:16:27.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.378 "psk": "key0" 00:16:27.378 } 00:16:27.378 }, 00:16:27.378 { 00:16:27.378 "method": "nvmf_subsystem_add_ns", 00:16:27.378 "params": { 00:16:27.378 "namespace": { 00:16:27.378 "bdev_name": "malloc0", 00:16:27.378 "nguid": "6C5A6A580C304E658AC97EADACB31B59", 00:16:27.378 "no_auto_visible": false, 00:16:27.378 "nsid": 1, 00:16:27.378 "uuid": "6c5a6a58-0c30-4e65-8ac9-7eadacb31b59" 00:16:27.378 }, 00:16:27.378 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:27.378 } 00:16:27.378 }, 00:16:27.378 { 00:16:27.378 "method": "nvmf_subsystem_add_listener", 00:16:27.378 "params": { 00:16:27.378 "listen_address": { 00:16:27.378 "adrfam": "IPv4", 00:16:27.378 "traddr": "10.0.0.2", 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.378 "trsvcid": "4420", 00:16:27.378 "trtype": "TCP" 00:16:27.378 }, 00:16:27.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.378 "secure_channel": true 00:16:27.378 } 00:16:27.378 } 00:16:27.378 ] 00:16:27.378 }
00:16:27.378 ] 00:16:27.378 }' 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84279 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84279 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84279 ']' 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:27.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:27.378 09:10:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:27.378 [2024-11-20 09:10:06.258081] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:27.378 [2024-11-20 09:10:06.258178] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.637 [2024-11-20 09:10:06.407488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.637 [2024-11-20 09:10:06.458335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:27.637 [2024-11-20 09:10:06.458396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.637 [2024-11-20 09:10:06.458423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:27.637 [2024-11-20 09:10:06.458430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:27.637 [2024-11-20 09:10:06.458437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.637 [2024-11-20 09:10:06.458903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.896 [2024-11-20 09:10:06.697704] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.896 [2024-11-20 09:10:06.729660] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.896 [2024-11-20 09:10:06.730010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=84323 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 84323 /var/tmp/bdevperf.sock 00:16:28.465 09:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84323 ']' 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:28.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:28.465 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:16:28.465 "subsystems": [ 00:16:28.465 { 00:16:28.465 "subsystem": "keyring", 00:16:28.465 "config": [ 00:16:28.465 { 00:16:28.465 "method": "keyring_file_add_key", 00:16:28.465 "params": { 00:16:28.465 "name": "key0", 00:16:28.465 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:28.465 } 00:16:28.465 } 00:16:28.465 ] 00:16:28.465 }, 00:16:28.465 { 00:16:28.465 "subsystem": "iobuf", 00:16:28.465 "config": [ 00:16:28.465 { 00:16:28.465 "method": "iobuf_set_options", 00:16:28.465 "params": { 00:16:28.465 "enable_numa": false, 00:16:28.465 "large_bufsize": 135168, 00:16:28.465 "large_pool_count": 1024, 00:16:28.465 "small_bufsize": 8192, 00:16:28.465 "small_pool_count": 8192 00:16:28.465 } 00:16:28.465 } 00:16:28.465 ] 00:16:28.465 }, 00:16:28.465 { 00:16:28.465 "subsystem": "sock", 00:16:28.465 "config": [ 00:16:28.465 { 00:16:28.465 
"method": "sock_set_default_impl", 00:16:28.465 "params": { 00:16:28.465 "impl_name": "posix" 00:16:28.465 } 00:16:28.465 }, 00:16:28.465 { 00:16:28.465 "method": "sock_impl_set_options", 00:16:28.465 "params": { 00:16:28.466 "enable_ktls": false, 00:16:28.466 "enable_placement_id": 0, 00:16:28.466 "enable_quickack": false, 00:16:28.466 "enable_recv_pipe": true, 00:16:28.466 "enable_zerocopy_send_client": false, 00:16:28.466 "enable_zerocopy_send_server": true, 00:16:28.466 "impl_name": "ssl", 00:16:28.466 "recv_buf_size": 4096, 00:16:28.466 "send_buf_size": 4096, 00:16:28.466 "tls_version": 0, 00:16:28.466 "zerocopy_threshold": 0 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "sock_impl_set_options", 00:16:28.466 "params": { 00:16:28.466 "enable_ktls": false, 00:16:28.466 "enable_placement_id": 0, 00:16:28.466 "enable_quickack": false, 00:16:28.466 "enable_recv_pipe": true, 00:16:28.466 "enable_zerocopy_send_client": false, 00:16:28.466 "enable_zerocopy_send_server": true, 00:16:28.466 "impl_name": "posix", 00:16:28.466 "recv_buf_size": 2097152, 00:16:28.466 "send_buf_size": 2097152, 00:16:28.466 "tls_version": 0, 00:16:28.466 "zerocopy_threshold": 0 00:16:28.466 } 00:16:28.466 } 00:16:28.466 ] 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "subsystem": "vmd", 00:16:28.466 "config": [] 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "subsystem": "accel", 00:16:28.466 "config": [ 00:16:28.466 { 00:16:28.466 "method": "accel_set_options", 00:16:28.466 "params": { 00:16:28.466 "buf_count": 2048, 00:16:28.466 "large_cache_size": 16, 00:16:28.466 "sequence_count": 2048, 00:16:28.466 "small_cache_size": 128, 00:16:28.466 "task_count": 2048 00:16:28.466 } 00:16:28.466 } 00:16:28.466 ] 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "subsystem": "bdev", 00:16:28.466 "config": [ 00:16:28.466 { 00:16:28.466 "method": "bdev_set_options", 00:16:28.466 "params": { 00:16:28.466 "bdev_auto_examine": true, 00:16:28.466 "bdev_io_cache_size": 256, 00:16:28.466 
"bdev_io_pool_size": 65535, 00:16:28.466 "iobuf_large_cache_size": 16, 00:16:28.466 "iobuf_small_cache_size": 128 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_raid_set_options", 00:16:28.466 "params": { 00:16:28.466 "process_max_bandwidth_mb_sec": 0, 00:16:28.466 "process_window_size_kb": 1024 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_iscsi_set_options", 00:16:28.466 "params": { 00:16:28.466 "timeout_sec": 30 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_nvme_set_options", 00:16:28.466 "params": { 00:16:28.466 "action_on_timeout": "none", 00:16:28.466 "allow_accel_sequence": false, 00:16:28.466 "arbitration_burst": 0, 00:16:28.466 "bdev_retry_count": 3, 00:16:28.466 "ctrlr_loss_timeout_sec": 0, 00:16:28.466 "delay_cmd_submit": true, 00:16:28.466 "dhchap_dhgroups": [ 00:16:28.466 "null", 00:16:28.466 "ffdhe2048", 00:16:28.466 "ffdhe3072", 00:16:28.466 "ffdhe4096", 00:16:28.466 "ffdhe6144", 00:16:28.466 "ffdhe8192" 00:16:28.466 ], 00:16:28.466 "dhchap_digests": [ 00:16:28.466 "sha256", 00:16:28.466 "sha384", 00:16:28.466 "sha512" 00:16:28.466 ], 00:16:28.466 "disable_auto_failback": false, 00:16:28.466 "fast_io_fail_timeout_sec": 0, 00:16:28.466 "generate_uuids": false, 00:16:28.466 "high_priority_weight": 0, 00:16:28.466 "io_path_stat": false, 00:16:28.466 "io_queue_requests": 512, 00:16:28.466 "keep_alive_timeout_ms": 10000, 00:16:28.466 "low_priority_weight": 0, 00:16:28.466 "medium_priority_weight": 0, 00:16:28.466 "nvme_adminq_poll_period_us": 10000, 00:16:28.466 "nvme_error_stat": false, 00:16:28.466 "nvme_ioq_poll_period_us": 0, 00:16:28.466 "rdma_cm_event_timeout_ms": 0, 00:16:28.466 "rdma_max_cq_size": 0, 00:16:28.466 "rdma_srq_size": 0, 00:16:28.466 "reconnect_delay_sec": 0, 00:16:28.466 "timeout_admin_us": 0, 00:16:28.466 "timeout_us": 0, 00:16:28.466 "transport_ack_timeout": 0, 00:16:28.466 "transport_retry_count": 4, 00:16:28.466 "transport_tos": 0 00:16:28.466 } 
00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_nvme_attach_controller", 00:16:28.466 "params": { 00:16:28.466 "adrfam": "IPv4", 00:16:28.466 "ctrlr_loss_timeout_sec": 0, 00:16:28.466 "ddgst": false, 00:16:28.466 "fast_io_fail_timeout_sec": 0, 00:16:28.466 "hdgst": false, 00:16:28.466 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.466 "multipath": "multipath", 00:16:28.466 "name": "TLSTEST", 00:16:28.466 "prchk_guard": false, 00:16:28.466 "prchk_reftag": false, 00:16:28.466 "psk": "key0", 00:16:28.466 "reconnect_delay_sec": 0, 00:16:28.466 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.466 "traddr": "10.0.0.2", 00:16:28.466 "trsvcid": "4420", 00:16:28.466 "trtype": "TCP" 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_nvme_set_hotplug", 00:16:28.466 "params": { 00:16:28.466 "enable": false, 00:16:28.466 "period_us": 100000 00:16:28.466 } 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "method": "bdev_wait_for_examine" 00:16:28.466 } 00:16:28.466 ] 00:16:28.466 }, 00:16:28.466 { 00:16:28.466 "subsystem": "nbd", 00:16:28.466 "config": [] 00:16:28.466 } 00:16:28.466 ] 00:16:28.466 }' 00:16:28.725 [2024-11-20 09:10:07.413495] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:28.725 [2024-11-20 09:10:07.413622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84323 ] 00:16:28.725 [2024-11-20 09:10:07.566300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.725 [2024-11-20 09:10:07.632066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.984 [2024-11-20 09:10:07.810676] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:29.551 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.551 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:29.551 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:29.810 Running I/O for 10 seconds... 
00:16:31.686 3983.00 IOPS, 15.56 MiB/s [2024-11-20T09:10:11.981Z] 4090.50 IOPS, 15.98 MiB/s [2024-11-20T09:10:12.918Z] 4109.67 IOPS, 16.05 MiB/s [2024-11-20T09:10:13.854Z] 4121.75 IOPS, 16.10 MiB/s [2024-11-20T09:10:14.792Z] 4126.20 IOPS, 16.12 MiB/s [2024-11-20T09:10:15.758Z] 4144.33 IOPS, 16.19 MiB/s [2024-11-20T09:10:16.692Z] 4143.29 IOPS, 16.18 MiB/s [2024-11-20T09:10:17.627Z] 4182.88 IOPS, 16.34 MiB/s [2024-11-20T09:10:19.004Z] 4213.67 IOPS, 16.46 MiB/s [2024-11-20T09:10:19.004Z] 4237.80 IOPS, 16.55 MiB/s 00:16:40.085 Latency(us) 00:16:40.085 [2024-11-20T09:10:19.004Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.085 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:40.085 Verification LBA range: start 0x0 length 0x2000 00:16:40.085 TLSTESTn1 : 10.02 4243.60 16.58 0.00 0.00 30109.16 5957.82 34317.03 00:16:40.085 [2024-11-20T09:10:19.004Z] =================================================================================================================== 00:16:40.085 [2024-11-20T09:10:19.004Z] Total : 4243.60 16.58 0.00 0.00 30109.16 5957.82 34317.03 00:16:40.085 { 00:16:40.085 "results": [ 00:16:40.085 { 00:16:40.085 "job": "TLSTESTn1", 00:16:40.085 "core_mask": "0x4", 00:16:40.085 "workload": "verify", 00:16:40.085 "status": "finished", 00:16:40.085 "verify_range": { 00:16:40.085 "start": 0, 00:16:40.085 "length": 8192 00:16:40.085 }, 00:16:40.085 "queue_depth": 128, 00:16:40.085 "io_size": 4096, 00:16:40.085 "runtime": 10.016494, 00:16:40.085 "iops": 4243.600605161846, 00:16:40.085 "mibps": 16.57656486391346, 00:16:40.085 "io_failed": 0, 00:16:40.085 "io_timeout": 0, 00:16:40.085 "avg_latency_us": 30109.16455003144, 00:16:40.085 "min_latency_us": 5957.818181818182, 00:16:40.085 "max_latency_us": 34317.03272727273 00:16:40.085 } 00:16:40.085 ], 00:16:40.085 "core_count": 1 00:16:40.085 } 00:16:40.085 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:16:40.085 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84323 ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:40.086 killing process with pid 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84323' 00:16:40.086 Received shutdown signal, test time was about 10.000000 seconds 00:16:40.086 00:16:40.086 Latency(us) 00:16:40.086 [2024-11-20T09:10:19.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.086 [2024-11-20T09:10:19.005Z] =================================================================================================================== 00:16:40.086 [2024-11-20T09:10:19.005Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84323 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 84279 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 
84279 ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84279 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84279 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:40.086 killing process with pid 84279 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84279' 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84279 00:16:40.086 09:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84279 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84470 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84470 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # 
'[' -z 84470 ']' 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.345 09:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.345 [2024-11-20 09:10:19.144666] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:40.345 [2024-11-20 09:10:19.145480] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.604 [2024-11-20 09:10:19.299144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.604 [2024-11-20 09:10:19.362331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.604 [2024-11-20 09:10:19.362385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.604 [2024-11-20 09:10:19.362398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.604 [2024-11-20 09:10:19.362409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.604 [2024-11-20 09:10:19.362418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.604 [2024-11-20 09:10:19.362901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.171 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:41.171 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:41.171 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:41.171 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:41.171 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:41.429 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.429 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.UyYF3kYxcS 00:16:41.429 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.UyYF3kYxcS 00:16:41.429 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:41.688 [2024-11-20 09:10:20.395839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.688 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:41.946 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:42.204 [2024-11-20 09:10:20.899945] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.204 [2024-11-20 09:10:20.900236] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.204 
09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:42.463 malloc0 00:16:42.463 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:42.722 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:42.980 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=84585 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 84585 /var/tmp/bdevperf.sock 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84585 ']' 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.239 09:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:43.239 [2024-11-20 09:10:21.958472] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:43.239 [2024-11-20 09:10:21.958541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84585 ] 00:16:43.239 [2024-11-20 09:10:22.098201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.239 [2024-11-20 09:10:22.142864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.498 09:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.498 09:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.498 09:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:43.757 09:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:44.015 [2024-11-20 09:10:22.783507] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:44.015 nvme0n1 00:16:44.015 09:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:44.274 Running I/O for 1 seconds... 
00:16:45.210 4223.00 IOPS, 16.50 MiB/s 00:16:45.211 Latency(us) 00:16:45.211 [2024-11-20T09:10:24.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.211 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:45.211 Verification LBA range: start 0x0 length 0x2000 00:16:45.211 nvme0n1 : 1.03 4228.69 16.52 0.00 0.00 29969.43 7238.75 19660.80 00:16:45.211 [2024-11-20T09:10:24.130Z] =================================================================================================================== 00:16:45.211 [2024-11-20T09:10:24.130Z] Total : 4228.69 16.52 0.00 0.00 29969.43 7238.75 19660.80 00:16:45.211 { 00:16:45.211 "results": [ 00:16:45.211 { 00:16:45.211 "job": "nvme0n1", 00:16:45.211 "core_mask": "0x2", 00:16:45.211 "workload": "verify", 00:16:45.211 "status": "finished", 00:16:45.211 "verify_range": { 00:16:45.211 "start": 0, 00:16:45.211 "length": 8192 00:16:45.211 }, 00:16:45.211 "queue_depth": 128, 00:16:45.211 "io_size": 4096, 00:16:45.211 "runtime": 1.028923, 00:16:45.211 "iops": 4228.693497958545, 00:16:45.211 "mibps": 16.518333976400566, 00:16:45.211 "io_failed": 0, 00:16:45.211 "io_timeout": 0, 00:16:45.211 "avg_latency_us": 29969.427533900252, 00:16:45.211 "min_latency_us": 7238.749090909091, 00:16:45.211 "max_latency_us": 19660.8 00:16:45.211 } 00:16:45.211 ], 00:16:45.211 "core_count": 1 00:16:45.211 } 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 84585 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84585 ']' 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84585 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.211 09:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84585 00:16:45.211 killing process with pid 84585 00:16:45.211 Received shutdown signal, test time was about 1.000000 seconds 00:16:45.211 00:16:45.211 Latency(us) 00:16:45.211 [2024-11-20T09:10:24.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.211 [2024-11-20T09:10:24.130Z] =================================================================================================================== 00:16:45.211 [2024-11-20T09:10:24.130Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84585' 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84585 00:16:45.211 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84585 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 84470 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84470 ']' 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84470 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84470 00:16:45.470 killing process with pid 84470 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84470' 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84470 00:16:45.470 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84470 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84647 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84647 00:16:45.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84647 ']' 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.729 09:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.729 [2024-11-20 09:10:24.588697] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:45.729 [2024-11-20 09:10:24.589894] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.988 [2024-11-20 09:10:24.739534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.988 [2024-11-20 09:10:24.793396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.988 [2024-11-20 09:10:24.793750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.988 [2024-11-20 09:10:24.793802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:45.988 [2024-11-20 09:10:24.793812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:45.988 [2024-11-20 09:10:24.793819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.988 [2024-11-20 09:10:24.794279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.924 [2024-11-20 09:10:25.603134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.924 malloc0 00:16:46.924 [2024-11-20 09:10:25.634079] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:46.924 [2024-11-20 09:10:25.634502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=84697 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 84697 /var/tmp/bdevperf.sock 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84697 ']' 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.924 09:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.924 [2024-11-20 09:10:25.715013] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:46.924 [2024-11-20 09:10:25.715281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84697 ] 00:16:47.183 [2024-11-20 09:10:25.853953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.183 [2024-11-20 09:10:25.903058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.132 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.132 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:48.132 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UyYF3kYxcS 00:16:48.132 09:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:48.404 [2024-11-20 09:10:27.260093] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.663 nvme0n1 00:16:48.663 09:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:48.663 Running I/O for 1 seconds... 
00:16:49.598 4224.00 IOPS, 16.50 MiB/s 00:16:49.598 Latency(us) 00:16:49.598 [2024-11-20T09:10:28.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.598 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:49.598 Verification LBA range: start 0x0 length 0x2000 00:16:49.599 nvme0n1 : 1.03 4239.37 16.56 0.00 0.00 29878.73 9830.40 21209.83 00:16:49.599 [2024-11-20T09:10:28.518Z] =================================================================================================================== 00:16:49.599 [2024-11-20T09:10:28.518Z] Total : 4239.37 16.56 0.00 0.00 29878.73 9830.40 21209.83 00:16:49.599 { 00:16:49.599 "results": [ 00:16:49.599 { 00:16:49.599 "job": "nvme0n1", 00:16:49.599 "core_mask": "0x2", 00:16:49.599 "workload": "verify", 00:16:49.599 "status": "finished", 00:16:49.599 "verify_range": { 00:16:49.599 "start": 0, 00:16:49.599 "length": 8192 00:16:49.599 }, 00:16:49.599 "queue_depth": 128, 00:16:49.599 "io_size": 4096, 00:16:49.599 "runtime": 1.026567, 00:16:49.599 "iops": 4239.3725884428395, 00:16:49.599 "mibps": 16.560049173604842, 00:16:49.599 "io_failed": 0, 00:16:49.599 "io_timeout": 0, 00:16:49.599 "avg_latency_us": 29878.725133689837, 00:16:49.599 "min_latency_us": 9830.4, 00:16:49.599 "max_latency_us": 21209.832727272726 00:16:49.599 } 00:16:49.599 ], 00:16:49.599 "core_count": 1 00:16:49.599 } 00:16:49.858 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:16:49.858 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.858 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.858 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.858 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:16:49.858 "subsystems": [ 00:16:49.858 { 00:16:49.858 "subsystem": "keyring", 
00:16:49.858 "config": [ 00:16:49.858 { 00:16:49.858 "method": "keyring_file_add_key", 00:16:49.858 "params": { 00:16:49.858 "name": "key0", 00:16:49.858 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:49.858 } 00:16:49.858 } 00:16:49.858 ] 00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "subsystem": "iobuf", 00:16:49.858 "config": [ 00:16:49.858 { 00:16:49.858 "method": "iobuf_set_options", 00:16:49.858 "params": { 00:16:49.858 "enable_numa": false, 00:16:49.858 "large_bufsize": 135168, 00:16:49.858 "large_pool_count": 1024, 00:16:49.858 "small_bufsize": 8192, 00:16:49.858 "small_pool_count": 8192 00:16:49.858 } 00:16:49.858 } 00:16:49.858 ] 00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "subsystem": "sock", 00:16:49.858 "config": [ 00:16:49.858 { 00:16:49.858 "method": "sock_set_default_impl", 00:16:49.858 "params": { 00:16:49.858 "impl_name": "posix" 00:16:49.858 } 00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "method": "sock_impl_set_options", 00:16:49.858 "params": { 00:16:49.858 "enable_ktls": false, 00:16:49.858 "enable_placement_id": 0, 00:16:49.858 "enable_quickack": false, 00:16:49.858 "enable_recv_pipe": true, 00:16:49.858 "enable_zerocopy_send_client": false, 00:16:49.858 "enable_zerocopy_send_server": true, 00:16:49.858 "impl_name": "ssl", 00:16:49.858 "recv_buf_size": 4096, 00:16:49.858 "send_buf_size": 4096, 00:16:49.858 "tls_version": 0, 00:16:49.858 "zerocopy_threshold": 0 00:16:49.858 } 00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "method": "sock_impl_set_options", 00:16:49.858 "params": { 00:16:49.858 "enable_ktls": false, 00:16:49.858 "enable_placement_id": 0, 00:16:49.858 "enable_quickack": false, 00:16:49.858 "enable_recv_pipe": true, 00:16:49.858 "enable_zerocopy_send_client": false, 00:16:49.858 "enable_zerocopy_send_server": true, 00:16:49.858 "impl_name": "posix", 00:16:49.858 "recv_buf_size": 2097152, 00:16:49.858 "send_buf_size": 2097152, 00:16:49.858 "tls_version": 0, 00:16:49.858 "zerocopy_threshold": 0 00:16:49.858 } 00:16:49.858 } 00:16:49.858 ] 
00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "subsystem": "vmd", 00:16:49.858 "config": [] 00:16:49.858 }, 00:16:49.858 { 00:16:49.858 "subsystem": "accel", 00:16:49.858 "config": [ 00:16:49.858 { 00:16:49.858 "method": "accel_set_options", 00:16:49.858 "params": { 00:16:49.858 "buf_count": 2048, 00:16:49.858 "large_cache_size": 16, 00:16:49.858 "sequence_count": 2048, 00:16:49.858 "small_cache_size": 128, 00:16:49.858 "task_count": 2048 00:16:49.858 } 00:16:49.858 } 00:16:49.858 ] 00:16:49.858 }, 00:16:49.858 { 00:16:49.859 "subsystem": "bdev", 00:16:49.859 "config": [ 00:16:49.859 { 00:16:49.859 "method": "bdev_set_options", 00:16:49.859 "params": { 00:16:49.859 "bdev_auto_examine": true, 00:16:49.859 "bdev_io_cache_size": 256, 00:16:49.859 "bdev_io_pool_size": 65535, 00:16:49.859 "iobuf_large_cache_size": 16, 00:16:49.859 "iobuf_small_cache_size": 128 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_raid_set_options", 00:16:49.859 "params": { 00:16:49.859 "process_max_bandwidth_mb_sec": 0, 00:16:49.859 "process_window_size_kb": 1024 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_iscsi_set_options", 00:16:49.859 "params": { 00:16:49.859 "timeout_sec": 30 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_nvme_set_options", 00:16:49.859 "params": { 00:16:49.859 "action_on_timeout": "none", 00:16:49.859 "allow_accel_sequence": false, 00:16:49.859 "arbitration_burst": 0, 00:16:49.859 "bdev_retry_count": 3, 00:16:49.859 "ctrlr_loss_timeout_sec": 0, 00:16:49.859 "delay_cmd_submit": true, 00:16:49.859 "dhchap_dhgroups": [ 00:16:49.859 "null", 00:16:49.859 "ffdhe2048", 00:16:49.859 "ffdhe3072", 00:16:49.859 "ffdhe4096", 00:16:49.859 "ffdhe6144", 00:16:49.859 "ffdhe8192" 00:16:49.859 ], 00:16:49.859 "dhchap_digests": [ 00:16:49.859 "sha256", 00:16:49.859 "sha384", 00:16:49.859 "sha512" 00:16:49.859 ], 00:16:49.859 "disable_auto_failback": false, 00:16:49.859 "fast_io_fail_timeout_sec": 0, 
00:16:49.859 "generate_uuids": false, 00:16:49.859 "high_priority_weight": 0, 00:16:49.859 "io_path_stat": false, 00:16:49.859 "io_queue_requests": 0, 00:16:49.859 "keep_alive_timeout_ms": 10000, 00:16:49.859 "low_priority_weight": 0, 00:16:49.859 "medium_priority_weight": 0, 00:16:49.859 "nvme_adminq_poll_period_us": 10000, 00:16:49.859 "nvme_error_stat": false, 00:16:49.859 "nvme_ioq_poll_period_us": 0, 00:16:49.859 "rdma_cm_event_timeout_ms": 0, 00:16:49.859 "rdma_max_cq_size": 0, 00:16:49.859 "rdma_srq_size": 0, 00:16:49.859 "reconnect_delay_sec": 0, 00:16:49.859 "timeout_admin_us": 0, 00:16:49.859 "timeout_us": 0, 00:16:49.859 "transport_ack_timeout": 0, 00:16:49.859 "transport_retry_count": 4, 00:16:49.859 "transport_tos": 0 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_nvme_set_hotplug", 00:16:49.859 "params": { 00:16:49.859 "enable": false, 00:16:49.859 "period_us": 100000 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_malloc_create", 00:16:49.859 "params": { 00:16:49.859 "block_size": 4096, 00:16:49.859 "dif_is_head_of_md": false, 00:16:49.859 "dif_pi_format": 0, 00:16:49.859 "dif_type": 0, 00:16:49.859 "md_size": 0, 00:16:49.859 "name": "malloc0", 00:16:49.859 "num_blocks": 8192, 00:16:49.859 "optimal_io_boundary": 0, 00:16:49.859 "physical_block_size": 4096, 00:16:49.859 "uuid": "47170bd4-b199-4614-aa65-2c547161fafc" 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "bdev_wait_for_examine" 00:16:49.859 } 00:16:49.859 ] 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "subsystem": "nbd", 00:16:49.859 "config": [] 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "subsystem": "scheduler", 00:16:49.859 "config": [ 00:16:49.859 { 00:16:49.859 "method": "framework_set_scheduler", 00:16:49.859 "params": { 00:16:49.859 "name": "static" 00:16:49.859 } 00:16:49.859 } 00:16:49.859 ] 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "subsystem": "nvmf", 00:16:49.859 "config": [ 00:16:49.859 { 00:16:49.859 
"method": "nvmf_set_config", 00:16:49.859 "params": { 00:16:49.859 "admin_cmd_passthru": { 00:16:49.859 "identify_ctrlr": false 00:16:49.859 }, 00:16:49.859 "dhchap_dhgroups": [ 00:16:49.859 "null", 00:16:49.859 "ffdhe2048", 00:16:49.859 "ffdhe3072", 00:16:49.859 "ffdhe4096", 00:16:49.859 "ffdhe6144", 00:16:49.859 "ffdhe8192" 00:16:49.859 ], 00:16:49.859 "dhchap_digests": [ 00:16:49.859 "sha256", 00:16:49.859 "sha384", 00:16:49.859 "sha512" 00:16:49.859 ], 00:16:49.859 "discovery_filter": "match_any" 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_set_max_subsystems", 00:16:49.859 "params": { 00:16:49.859 "max_subsystems": 1024 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_set_crdt", 00:16:49.859 "params": { 00:16:49.859 "crdt1": 0, 00:16:49.859 "crdt2": 0, 00:16:49.859 "crdt3": 0 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_create_transport", 00:16:49.859 "params": { 00:16:49.859 "abort_timeout_sec": 1, 00:16:49.859 "ack_timeout": 0, 00:16:49.859 "buf_cache_size": 4294967295, 00:16:49.859 "c2h_success": false, 00:16:49.859 "data_wr_pool_size": 0, 00:16:49.859 "dif_insert_or_strip": false, 00:16:49.859 "in_capsule_data_size": 4096, 00:16:49.859 "io_unit_size": 131072, 00:16:49.859 "max_aq_depth": 128, 00:16:49.859 "max_io_qpairs_per_ctrlr": 127, 00:16:49.859 "max_io_size": 131072, 00:16:49.859 "max_queue_depth": 128, 00:16:49.859 "num_shared_buffers": 511, 00:16:49.859 "sock_priority": 0, 00:16:49.859 "trtype": "TCP", 00:16:49.859 "zcopy": false 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_create_subsystem", 00:16:49.859 "params": { 00:16:49.859 "allow_any_host": false, 00:16:49.859 "ana_reporting": false, 00:16:49.859 "max_cntlid": 65519, 00:16:49.859 "max_namespaces": 32, 00:16:49.859 "min_cntlid": 1, 00:16:49.859 "model_number": "SPDK bdev Controller", 00:16:49.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.859 "serial_number": 
"00000000000000000000" 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_subsystem_add_host", 00:16:49.859 "params": { 00:16:49.859 "host": "nqn.2016-06.io.spdk:host1", 00:16:49.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.859 "psk": "key0" 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_subsystem_add_ns", 00:16:49.859 "params": { 00:16:49.859 "namespace": { 00:16:49.859 "bdev_name": "malloc0", 00:16:49.859 "nguid": "47170BD4B1994614AA652C547161FAFC", 00:16:49.859 "no_auto_visible": false, 00:16:49.859 "nsid": 1, 00:16:49.859 "uuid": "47170bd4-b199-4614-aa65-2c547161fafc" 00:16:49.859 }, 00:16:49.859 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:49.859 } 00:16:49.859 }, 00:16:49.859 { 00:16:49.859 "method": "nvmf_subsystem_add_listener", 00:16:49.859 "params": { 00:16:49.859 "listen_address": { 00:16:49.859 "adrfam": "IPv4", 00:16:49.859 "traddr": "10.0.0.2", 00:16:49.859 "trsvcid": "4420", 00:16:49.859 "trtype": "TCP" 00:16:49.859 }, 00:16:49.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:49.859 "secure_channel": false, 00:16:49.859 "sock_impl": "ssl" 00:16:49.859 } 00:16:49.859 } 00:16:49.859 ] 00:16:49.859 } 00:16:49.859 ] 00:16:49.859 }' 00:16:49.859 09:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:16:50.119 "subsystems": [ 00:16:50.119 { 00:16:50.119 "subsystem": "keyring", 00:16:50.119 "config": [ 00:16:50.119 { 00:16:50.119 "method": "keyring_file_add_key", 00:16:50.119 "params": { 00:16:50.119 "name": "key0", 00:16:50.119 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:50.119 } 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "iobuf", 00:16:50.119 "config": [ 00:16:50.119 { 00:16:50.119 "method": "iobuf_set_options", 00:16:50.119 "params": { 00:16:50.119 "enable_numa": false, 
00:16:50.119 "large_bufsize": 135168, 00:16:50.119 "large_pool_count": 1024, 00:16:50.119 "small_bufsize": 8192, 00:16:50.119 "small_pool_count": 8192 00:16:50.119 } 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "sock", 00:16:50.119 "config": [ 00:16:50.119 { 00:16:50.119 "method": "sock_set_default_impl", 00:16:50.119 "params": { 00:16:50.119 "impl_name": "posix" 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "sock_impl_set_options", 00:16:50.119 "params": { 00:16:50.119 "enable_ktls": false, 00:16:50.119 "enable_placement_id": 0, 00:16:50.119 "enable_quickack": false, 00:16:50.119 "enable_recv_pipe": true, 00:16:50.119 "enable_zerocopy_send_client": false, 00:16:50.119 "enable_zerocopy_send_server": true, 00:16:50.119 "impl_name": "ssl", 00:16:50.119 "recv_buf_size": 4096, 00:16:50.119 "send_buf_size": 4096, 00:16:50.119 "tls_version": 0, 00:16:50.119 "zerocopy_threshold": 0 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "sock_impl_set_options", 00:16:50.119 "params": { 00:16:50.119 "enable_ktls": false, 00:16:50.119 "enable_placement_id": 0, 00:16:50.119 "enable_quickack": false, 00:16:50.119 "enable_recv_pipe": true, 00:16:50.119 "enable_zerocopy_send_client": false, 00:16:50.119 "enable_zerocopy_send_server": true, 00:16:50.119 "impl_name": "posix", 00:16:50.119 "recv_buf_size": 2097152, 00:16:50.119 "send_buf_size": 2097152, 00:16:50.119 "tls_version": 0, 00:16:50.119 "zerocopy_threshold": 0 00:16:50.119 } 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "vmd", 00:16:50.119 "config": [] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "accel", 00:16:50.119 "config": [ 00:16:50.119 { 00:16:50.119 "method": "accel_set_options", 00:16:50.119 "params": { 00:16:50.119 "buf_count": 2048, 00:16:50.119 "large_cache_size": 16, 00:16:50.119 "sequence_count": 2048, 00:16:50.119 "small_cache_size": 128, 00:16:50.119 "task_count": 2048 
00:16:50.119 } 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "bdev", 00:16:50.119 "config": [ 00:16:50.119 { 00:16:50.119 "method": "bdev_set_options", 00:16:50.119 "params": { 00:16:50.119 "bdev_auto_examine": true, 00:16:50.119 "bdev_io_cache_size": 256, 00:16:50.119 "bdev_io_pool_size": 65535, 00:16:50.119 "iobuf_large_cache_size": 16, 00:16:50.119 "iobuf_small_cache_size": 128 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_raid_set_options", 00:16:50.119 "params": { 00:16:50.119 "process_max_bandwidth_mb_sec": 0, 00:16:50.119 "process_window_size_kb": 1024 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_iscsi_set_options", 00:16:50.119 "params": { 00:16:50.119 "timeout_sec": 30 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_nvme_set_options", 00:16:50.119 "params": { 00:16:50.119 "action_on_timeout": "none", 00:16:50.119 "allow_accel_sequence": false, 00:16:50.119 "arbitration_burst": 0, 00:16:50.119 "bdev_retry_count": 3, 00:16:50.119 "ctrlr_loss_timeout_sec": 0, 00:16:50.119 "delay_cmd_submit": true, 00:16:50.119 "dhchap_dhgroups": [ 00:16:50.119 "null", 00:16:50.119 "ffdhe2048", 00:16:50.119 "ffdhe3072", 00:16:50.119 "ffdhe4096", 00:16:50.119 "ffdhe6144", 00:16:50.119 "ffdhe8192" 00:16:50.119 ], 00:16:50.119 "dhchap_digests": [ 00:16:50.119 "sha256", 00:16:50.119 "sha384", 00:16:50.119 "sha512" 00:16:50.119 ], 00:16:50.119 "disable_auto_failback": false, 00:16:50.119 "fast_io_fail_timeout_sec": 0, 00:16:50.119 "generate_uuids": false, 00:16:50.119 "high_priority_weight": 0, 00:16:50.119 "io_path_stat": false, 00:16:50.119 "io_queue_requests": 512, 00:16:50.119 "keep_alive_timeout_ms": 10000, 00:16:50.119 "low_priority_weight": 0, 00:16:50.119 "medium_priority_weight": 0, 00:16:50.119 "nvme_adminq_poll_period_us": 10000, 00:16:50.119 "nvme_error_stat": false, 00:16:50.119 "nvme_ioq_poll_period_us": 0, 00:16:50.119 
"rdma_cm_event_timeout_ms": 0, 00:16:50.119 "rdma_max_cq_size": 0, 00:16:50.119 "rdma_srq_size": 0, 00:16:50.119 "reconnect_delay_sec": 0, 00:16:50.119 "timeout_admin_us": 0, 00:16:50.119 "timeout_us": 0, 00:16:50.119 "transport_ack_timeout": 0, 00:16:50.119 "transport_retry_count": 4, 00:16:50.119 "transport_tos": 0 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_nvme_attach_controller", 00:16:50.119 "params": { 00:16:50.119 "adrfam": "IPv4", 00:16:50.119 "ctrlr_loss_timeout_sec": 0, 00:16:50.119 "ddgst": false, 00:16:50.119 "fast_io_fail_timeout_sec": 0, 00:16:50.119 "hdgst": false, 00:16:50.119 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:50.119 "multipath": "multipath", 00:16:50.119 "name": "nvme0", 00:16:50.119 "prchk_guard": false, 00:16:50.119 "prchk_reftag": false, 00:16:50.119 "psk": "key0", 00:16:50.119 "reconnect_delay_sec": 0, 00:16:50.119 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.119 "traddr": "10.0.0.2", 00:16:50.119 "trsvcid": "4420", 00:16:50.119 "trtype": "TCP" 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_nvme_set_hotplug", 00:16:50.119 "params": { 00:16:50.119 "enable": false, 00:16:50.119 "period_us": 100000 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_enable_histogram", 00:16:50.119 "params": { 00:16:50.119 "enable": true, 00:16:50.119 "name": "nvme0n1" 00:16:50.119 } 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "method": "bdev_wait_for_examine" 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "subsystem": "nbd", 00:16:50.119 "config": [] 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }' 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 84697 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84697 ']' 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84697 00:16:50.119 09:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.119 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84697 00:16:50.382 killing process with pid 84697 00:16:50.382 Received shutdown signal, test time was about 1.000000 seconds 00:16:50.382 00:16:50.382 Latency(us) 00:16:50.382 [2024-11-20T09:10:29.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.382 [2024-11-20T09:10:29.301Z] =================================================================================================================== 00:16:50.382 [2024-11-20T09:10:29.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84697' 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84697 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84697 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 84647 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84647 ']' 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84647 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.382 09:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84647 00:16:50.382 killing process with pid 84647 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84647' 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84647 00:16:50.382 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84647 00:16:50.642 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:16:50.642 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:16:50.642 "subsystems": [ 00:16:50.642 { 00:16:50.642 "subsystem": "keyring", 00:16:50.642 "config": [ 00:16:50.642 { 00:16:50.642 "method": "keyring_file_add_key", 00:16:50.642 "params": { 00:16:50.642 "name": "key0", 00:16:50.642 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:50.642 } 00:16:50.642 } 00:16:50.642 ] 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "subsystem": "iobuf", 00:16:50.642 "config": [ 00:16:50.642 { 00:16:50.642 "method": "iobuf_set_options", 00:16:50.642 "params": { 00:16:50.642 "enable_numa": false, 00:16:50.642 "large_bufsize": 135168, 00:16:50.642 "large_pool_count": 1024, 00:16:50.642 "small_bufsize": 8192, 00:16:50.642 "small_pool_count": 8192 00:16:50.642 } 00:16:50.642 } 00:16:50.642 ] 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "subsystem": "sock", 00:16:50.642 "config": [ 00:16:50.642 { 00:16:50.642 "method": "sock_set_default_impl", 00:16:50.642 "params": { 00:16:50.642 "impl_name": "posix" 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "sock_impl_set_options", 00:16:50.642 "params": { 00:16:50.642 
"enable_ktls": false, 00:16:50.642 "enable_placement_id": 0, 00:16:50.642 "enable_quickack": false, 00:16:50.642 "enable_recv_pipe": true, 00:16:50.642 "enable_zerocopy_send_client": false, 00:16:50.642 "enable_zerocopy_send_server": true, 00:16:50.642 "impl_name": "ssl", 00:16:50.642 "recv_buf_size": 4096, 00:16:50.642 "send_buf_size": 4096, 00:16:50.642 "tls_version": 0, 00:16:50.642 "zerocopy_threshold": 0 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "sock_impl_set_options", 00:16:50.642 "params": { 00:16:50.642 "enable_ktls": false, 00:16:50.642 "enable_placement_id": 0, 00:16:50.642 "enable_quickack": false, 00:16:50.642 "enable_recv_pipe": true, 00:16:50.642 "enable_zerocopy_send_client": false, 00:16:50.642 "enable_zerocopy_send_server": true, 00:16:50.642 "impl_name": "posix", 00:16:50.642 "recv_buf_size": 2097152, 00:16:50.642 "send_buf_size": 2097152, 00:16:50.642 "tls_version": 0, 00:16:50.642 "zerocopy_threshold": 0 00:16:50.642 } 00:16:50.642 } 00:16:50.642 ] 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "subsystem": "vmd", 00:16:50.642 "config": [] 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "subsystem": "accel", 00:16:50.642 "config": [ 00:16:50.642 { 00:16:50.642 "method": "accel_set_options", 00:16:50.642 "params": { 00:16:50.642 "buf_count": 2048, 00:16:50.642 "large_cache_size": 16, 00:16:50.642 "sequence_count": 2048, 00:16:50.642 "small_cache_size": 128, 00:16:50.642 "task_count": 2048 00:16:50.642 } 00:16:50.642 } 00:16:50.642 ] 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "subsystem": "bdev", 00:16:50.642 "config": [ 00:16:50.642 { 00:16:50.642 "method": "bdev_set_options", 00:16:50.642 "params": { 00:16:50.642 "bdev_auto_examine": true, 00:16:50.642 "bdev_io_cache_size": 256, 00:16:50.642 "bdev_io_pool_size": 65535, 00:16:50.642 "iobuf_large_cache_size": 16, 00:16:50.642 "iobuf_small_cache_size": 128 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "bdev_raid_set_options", 00:16:50.642 "params": { 
00:16:50.642 "process_max_bandwidth_mb_sec": 0, 00:16:50.642 "process_window_size_kb": 1024 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "bdev_iscsi_set_options", 00:16:50.642 "params": { 00:16:50.642 "timeout_sec": 30 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "bdev_nvme_set_options", 00:16:50.642 "params": { 00:16:50.642 "action_on_timeout": "none", 00:16:50.642 "allow_accel_sequence": false, 00:16:50.642 "arbitration_burst": 0, 00:16:50.642 "bdev_retry_count": 3, 00:16:50.642 "ctrlr_loss_timeout_sec": 0, 00:16:50.642 "delay_cmd_submit": true, 00:16:50.642 "dhchap_dhgroups": [ 00:16:50.642 "null", 00:16:50.642 "ffdhe2048", 00:16:50.642 "ffdhe3072", 00:16:50.642 "ffdhe4096", 00:16:50.642 "ffdhe6144", 00:16:50.642 "ffdhe8192" 00:16:50.642 ], 00:16:50.642 "dhchap_digests": [ 00:16:50.642 "sha256", 00:16:50.642 "sha384", 00:16:50.642 "sha512" 00:16:50.642 ], 00:16:50.642 "disable_auto_failback": false, 00:16:50.642 "fast_io_fail_timeout_sec": 0, 00:16:50.642 "generate_uuids": false, 00:16:50.642 "high_priority_weight": 0, 00:16:50.642 "io_path_stat": false, 00:16:50.642 "io_queue_requests": 0, 00:16:50.642 "keep_alive_timeout_ms": 10000, 00:16:50.642 "low_priority_weight": 0, 00:16:50.642 "medium_priority_weight": 0, 00:16:50.642 "nvme_adminq_poll_period_us": 10000, 00:16:50.642 "nvme_error_stat": false, 00:16:50.642 "nvme_ioq_poll_period_us": 0, 00:16:50.642 "rdma_cm_event_timeout_ms": 0, 00:16:50.642 "rdma_max_cq_size": 0, 00:16:50.642 "rdma_srq_size": 0, 00:16:50.642 "reconnect_delay_sec": 0, 00:16:50.642 "timeout_admin_us": 0, 00:16:50.642 "timeout_us": 0, 00:16:50.642 "transport_ack_timeout": 0, 00:16:50.642 "transport_retry_count": 4, 00:16:50.642 "transport_tos": 0 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": "bdev_nvme_set_hotplug", 00:16:50.642 "params": { 00:16:50.642 "enable": false, 00:16:50.642 "period_us": 100000 00:16:50.642 } 00:16:50.642 }, 00:16:50.642 { 00:16:50.642 "method": 
"bdev_malloc_create", 00:16:50.642 "params": { 00:16:50.643 "block_size": 4096, 00:16:50.643 "dif_is_head_of_md": false, 00:16:50.643 "dif_pi_format": 0, 00:16:50.643 "dif_type": 0, 00:16:50.643 "md_size": 0, 00:16:50.643 "name": "malloc0", 00:16:50.643 "num_blocks": 8192, 00:16:50.643 "optimal_io_boundary": 0, 00:16:50.643 "physical_block_size": 4096, 00:16:50.643 "uuid": "47170bd4-b199-4614-aa65-2c547161fafc" 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "bdev_wait_for_examine" 00:16:50.643 } 00:16:50.643 ] 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "subsystem": "nbd", 00:16:50.643 "config": [] 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "subsystem": "scheduler", 00:16:50.643 "config": [ 00:16:50.643 { 00:16:50.643 "method": "framework_set_scheduler", 00:16:50.643 "params": { 00:16:50.643 "name": "static" 00:16:50.643 } 00:16:50.643 } 00:16:50.643 ] 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "subsystem": "nvmf", 00:16:50.643 "config": [ 00:16:50.643 { 00:16:50.643 "method": "nvmf_set_config", 00:16:50.643 "params": { 00:16:50.643 "admin_cmd_passthru": { 00:16:50.643 "identify_ctrlr": false 00:16:50.643 }, 00:16:50.643 "dhchap_dhgroups": [ 00:16:50.643 "null", 00:16:50.643 "ffdhe2048", 00:16:50.643 "ffdhe3072", 00:16:50.643 "ffdhe4096", 00:16:50.643 "ffdhe6144", 00:16:50.643 "ffdhe8192" 00:16:50.643 ], 00:16:50.643 "dhchap_digests": [ 00:16:50.643 "sha256", 00:16:50.643 "sha384", 00:16:50.643 "sha512" 00:16:50.643 ], 00:16:50.643 "discovery_filter": "match_any" 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_set_max_subsystems", 00:16:50.643 "params": { 00:16:50.643 "max_subsystems": 1024 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_set_crdt", 00:16:50.643 "params": { 00:16:50.643 "crdt1": 0, 00:16:50.643 "crdt2": 0, 00:16:50.643 "crdt3": 0 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_create_transport", 00:16:50.643 "params": { 00:16:50.643 
"abort_timeout_sec": 1, 00:16:50.643 "ack_timeout": 0, 00:16:50.643 "buf_cache_size": 4294967295, 00:16:50.643 "c2h_success": false, 00:16:50.643 "data_wr_pool_size": 0, 00:16:50.643 "dif_insert_or_strip": false, 00:16:50.643 "in_capsule_data_size": 4096, 00:16:50.643 "io_unit_size": 131072, 00:16:50.643 "max_aq_depth": 128, 00:16:50.643 "max_io_qpairs_per_ctrlr": 127, 00:16:50.643 "max_io_size": 131072, 00:16:50.643 "max_queue_depth": 128, 00:16:50.643 "num_shared_buffers": 511, 00:16:50.643 "sock_priority": 0, 00:16:50.643 "trtype": "TCP", 00:16:50.643 "zcopy": false 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_create_subsystem", 00:16:50.643 "params": { 00:16:50.643 "allow_any_host": false, 00:16:50.643 "ana_reporting": false, 00:16:50.643 "max_cntlid": 65519, 00:16:50.643 "max_namespaces": 32, 00:16:50.643 "min_cntlid": 1, 00:16:50.643 "model_number": "SPDK bdev Controller", 00:16:50.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.643 "serial_number": "00000000000000000000" 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_subsystem_add_host", 00:16:50.643 "params": { 00:16:50.643 "host": "nqn.2016-06.io.spdk:host1", 00:16:50.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.643 "psk": "key0" 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_subsystem_add_ns", 00:16:50.643 "params": { 00:16:50.643 "namespace": { 00:16:50.643 "bdev_name": "malloc0", 00:16:50.643 "nguid": "47170BD4B1994614AA652C547161FAFC", 00:16:50.643 "no_auto_visible": false, 00:16:50.643 "nsid": 1, 00:16:50.643 "uuid": "47170bd4-b199-4614-aa65-2c547161fafc" 00:16:50.643 }, 00:16:50.643 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:50.643 } 00:16:50.643 }, 00:16:50.643 { 00:16:50.643 "method": "nvmf_subsystem_add_listener", 00:16:50.643 "params": { 00:16:50.643 "listen_address": { 00:16:50.643 "adrfam": "IPv4", 00:16:50.643 "traddr": "10.0.0.2", 00:16:50.643 "trsvcid": "4420", 00:16:50.643 "trtype": "TCP" 
00:16:50.643 }, 00:16:50.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:50.643 "secure_channel": false, 00:16:50.643 "sock_impl": "ssl" 00:16:50.643 } 00:16:50.643 } 00:16:50.643 ] 00:16:50.643 } 00:16:50.643 ] 00:16:50.643 }' 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=84788 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 84788 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84788 ']' 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.643 09:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:50.643 [2024-11-20 09:10:29.539275] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:50.643 [2024-11-20 09:10:29.539375] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.902 [2024-11-20 09:10:29.682453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.902 [2024-11-20 09:10:29.731229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.902 [2024-11-20 09:10:29.731278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.902 [2024-11-20 09:10:29.731305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.902 [2024-11-20 09:10:29.731313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.902 [2024-11-20 09:10:29.731320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:50.902 [2024-11-20 09:10:29.731746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.161 [2024-11-20 09:10:29.966014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.161 [2024-11-20 09:10:29.997980] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.161 [2024-11-20 09:10:29.998180] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.728 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.728 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:51.728 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:51.728 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.728 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
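The trace above shows the target's startup configuration being generated with `echo '{ ... }'` and handed to `nvmf_tgt` as `-c /dev/fd/62`, so the JSON never touches disk. As a minimal sketch (subsystem and method names copied from the dump above; this is illustrative only, not a complete `nvmf_tgt` configuration), the same `subsystems`/`config`/`method`/`params` shape can be assembled and round-tripped in Python:

```python
import json

# Minimal sketch of the SPDK-style startup config echoed in the log above.
# Values are copied from the dump; this is illustrative, not a full config.
config = {
    "subsystems": [
        {
            "subsystem": "keyring",
            "config": [
                {
                    "method": "keyring_file_add_key",
                    "params": {"name": "key0", "path": "/tmp/tmp.UyYF3kYxcS"},
                }
            ],
        },
        {
            "subsystem": "nvmf",
            "config": [
                {
                    "method": "nvmf_create_transport",
                    "params": {"trtype": "TCP", "max_queue_depth": 128},
                },
                {
                    "method": "nvmf_subsystem_add_host",
                    "params": {
                        "nqn": "nqn.2016-06.io.spdk:cnode1",
                        "host": "nqn.2016-06.io.spdk:host1",
                        "psk": "key0",
                    },
                },
            ],
        },
    ],
}

# Round-trip through JSON, as the shell pipeline effectively does via /dev/fd/62.
blob = json.dumps(config, indent=2)
parsed = json.loads(blob)
methods = [c["method"] for s in parsed["subsystems"] for c in s["config"]]
print(methods)
```

In the test script the equivalent JSON is built entirely inline in bash; feeding it through a file descriptor avoids writing a temporary config file between test stages.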
00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=84832 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 84832 /var/tmp/bdevperf.sock 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84832 ']' 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:51.987 09:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:16:51.987 "subsystems": [ 00:16:51.987 { 00:16:51.987 "subsystem": "keyring", 00:16:51.987 "config": [ 00:16:51.987 { 00:16:51.987 "method": "keyring_file_add_key", 00:16:51.987 "params": { 00:16:51.987 "name": "key0", 00:16:51.987 "path": "/tmp/tmp.UyYF3kYxcS" 00:16:51.987 } 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "iobuf", 00:16:51.988 "config": [ 00:16:51.988 { 00:16:51.988 "method": "iobuf_set_options", 00:16:51.988 "params": { 00:16:51.988 "enable_numa": false, 
00:16:51.988 "large_bufsize": 135168, 00:16:51.988 "large_pool_count": 1024, 00:16:51.988 "small_bufsize": 8192, 00:16:51.988 "small_pool_count": 8192 00:16:51.988 } 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "sock", 00:16:51.988 "config": [ 00:16:51.988 { 00:16:51.988 "method": "sock_set_default_impl", 00:16:51.988 "params": { 00:16:51.988 "impl_name": "posix" 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "sock_impl_set_options", 00:16:51.988 "params": { 00:16:51.988 "enable_ktls": false, 00:16:51.988 "enable_placement_id": 0, 00:16:51.988 "enable_quickack": false, 00:16:51.988 "enable_recv_pipe": true, 00:16:51.988 "enable_zerocopy_send_client": false, 00:16:51.988 "enable_zerocopy_send_server": true, 00:16:51.988 "impl_name": "ssl", 00:16:51.988 "recv_buf_size": 4096, 00:16:51.988 "send_buf_size": 4096, 00:16:51.988 "tls_version": 0, 00:16:51.988 "zerocopy_threshold": 0 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "sock_impl_set_options", 00:16:51.988 "params": { 00:16:51.988 "enable_ktls": false, 00:16:51.988 "enable_placement_id": 0, 00:16:51.988 "enable_quickack": false, 00:16:51.988 "enable_recv_pipe": true, 00:16:51.988 "enable_zerocopy_send_client": false, 00:16:51.988 "enable_zerocopy_send_server": true, 00:16:51.988 "impl_name": "posix", 00:16:51.988 "recv_buf_size": 2097152, 00:16:51.988 "send_buf_size": 2097152, 00:16:51.988 "tls_version": 0, 00:16:51.988 "zerocopy_threshold": 0 00:16:51.988 } 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "vmd", 00:16:51.988 "config": [] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "accel", 00:16:51.988 "config": [ 00:16:51.988 { 00:16:51.988 "method": "accel_set_options", 00:16:51.988 "params": { 00:16:51.988 "buf_count": 2048, 00:16:51.988 "large_cache_size": 16, 00:16:51.988 "sequence_count": 2048, 00:16:51.988 "small_cache_size": 128, 00:16:51.988 "task_count": 2048 
00:16:51.988 } 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "bdev", 00:16:51.988 "config": [ 00:16:51.988 { 00:16:51.988 "method": "bdev_set_options", 00:16:51.988 "params": { 00:16:51.988 "bdev_auto_examine": true, 00:16:51.988 "bdev_io_cache_size": 256, 00:16:51.988 "bdev_io_pool_size": 65535, 00:16:51.988 "iobuf_large_cache_size": 16, 00:16:51.988 "iobuf_small_cache_size": 128 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_raid_set_options", 00:16:51.988 "params": { 00:16:51.988 "process_max_bandwidth_mb_sec": 0, 00:16:51.988 "process_window_size_kb": 1024 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_iscsi_set_options", 00:16:51.988 "params": { 00:16:51.988 "timeout_sec": 30 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_nvme_set_options", 00:16:51.988 "params": { 00:16:51.988 "action_on_timeout": "none", 00:16:51.988 "allow_accel_sequence": false, 00:16:51.988 "arbitration_burst": 0, 00:16:51.988 "bdev_retry_count": 3, 00:16:51.988 "ctrlr_loss_timeout_sec": 0, 00:16:51.988 "delay_cmd_submit": true, 00:16:51.988 "dhchap_dhgroups": [ 00:16:51.988 "null", 00:16:51.988 "ffdhe2048", 00:16:51.988 "ffdhe3072", 00:16:51.988 "ffdhe4096", 00:16:51.988 "ffdhe6144", 00:16:51.988 "ffdhe8192" 00:16:51.988 ], 00:16:51.988 "dhchap_digests": [ 00:16:51.988 "sha256", 00:16:51.988 "sha384", 00:16:51.988 "sha512" 00:16:51.988 ], 00:16:51.988 "disable_auto_failback": false, 00:16:51.988 "fast_io_fail_timeout_sec": 0, 00:16:51.988 "generate_uuids": false, 00:16:51.988 "high_priority_weight": 0, 00:16:51.988 "io_path_stat": false, 00:16:51.988 "io_queue_requests": 512, 00:16:51.988 "keep_alive_timeout_ms": 10000, 00:16:51.988 "low_priority_weight": 0, 00:16:51.988 "medium_priority_weight": 0, 00:16:51.988 "nvme_adminq_poll_period_us": 10000, 00:16:51.988 "nvme_error_stat": false, 00:16:51.988 "nvme_ioq_poll_period_us": 0, 00:16:51.988 
"rdma_cm_event_timeout_ms": 0, 00:16:51.988 "rdma_max_cq_size": 0, 00:16:51.988 "rdma_srq_size": 0, 00:16:51.988 "reconnect_delay_sec": 0, 00:16:51.988 "timeout_admin_us": 0, 00:16:51.988 "timeout_us": 0, 00:16:51.988 "transport_ack_timeout": 0, 00:16:51.988 "transport_retry_count": 4, 00:16:51.988 "transport_tos": 0 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_nvme_attach_controller", 00:16:51.988 "params": { 00:16:51.988 "adrfam": "IPv4", 00:16:51.988 "ctrlr_loss_timeout_sec": 0, 00:16:51.988 "ddgst": false, 00:16:51.988 "fast_io_fail_timeout_sec": 0, 00:16:51.988 "hdgst": false, 00:16:51.988 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.988 "multipath": "multipath", 00:16:51.988 "name": "nvme0", 00:16:51.988 "prchk_guard": false, 00:16:51.988 "prchk_reftag": false, 00:16:51.988 "psk": "key0", 00:16:51.988 "reconnect_delay_sec": 0, 00:16:51.988 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.988 "traddr": "10.0.0.2", 00:16:51.988 "trsvcid": "4420", 00:16:51.988 "trtype": "TCP" 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_nvme_set_hotplug", 00:16:51.988 "params": { 00:16:51.988 "enable": false, 00:16:51.988 "period_us": 100000 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_enable_histogram", 00:16:51.988 "params": { 00:16:51.988 "enable": true, 00:16:51.988 "name": "nvme0n1" 00:16:51.988 } 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "method": "bdev_wait_for_examine" 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }, 00:16:51.988 { 00:16:51.988 "subsystem": "nbd", 00:16:51.988 "config": [] 00:16:51.988 } 00:16:51.988 ] 00:16:51.988 }' 00:16:51.988 [2024-11-20 09:10:30.714311] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:51.988 [2024-11-20 09:10:30.714885] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84832 ] 00:16:51.988 [2024-11-20 09:10:30.867819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.247 [2024-11-20 09:10:30.932956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.247 [2024-11-20 09:10:31.115917] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.814 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.814 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:52.814 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:52.814 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:16:53.073 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.073 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:53.332 Running I/O for 1 seconds... 
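bdevperf reports throughput both as a human-readable table and as a JSON result object. The MiB/s column is derived directly from IOPS and the 4096-byte IO size; a quick sanity check of the figures in the summary that follows (values copied from the reported JSON):

```python
# Sanity-check the bdevperf result reported in the log:
# 4362.91 IOPS at 4096-byte IOs over a 1.026837 s runtime.
iops = 4362.912516786988   # "iops" from the JSON result
io_size = 4096             # "io_size" from the JSON result, in bytes
runtime = 1.026837         # "runtime" from the JSON result, in seconds

mibps = iops * io_size / (1024 * 1024)   # MiB/s = IOPS * bytes-per-IO / 2^20
total_ios = iops * runtime               # total IOs completed during the run

print(round(mibps, 2))     # 17.04, matching the "mibps" field
print(round(total_ios))    # 4480
```

The same relation explains the first progress line (4352.00 IOPS, 17.00 MiB/s): at a 4 KiB IO size, MiB/s is always IOPS divided by 256.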
00:16:54.268 4352.00 IOPS, 17.00 MiB/s 00:16:54.268 Latency(us) 00:16:54.268 [2024-11-20T09:10:33.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.268 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:54.268 Verification LBA range: start 0x0 length 0x2000 00:16:54.268 nvme0n1 : 1.03 4362.91 17.04 0.00 0.00 29041.89 9770.82 21448.15 00:16:54.268 [2024-11-20T09:10:33.187Z] =================================================================================================================== 00:16:54.268 [2024-11-20T09:10:33.187Z] Total : 4362.91 17.04 0.00 0.00 29041.89 9770.82 21448.15 00:16:54.268 { 00:16:54.268 "results": [ 00:16:54.268 { 00:16:54.268 "job": "nvme0n1", 00:16:54.268 "core_mask": "0x2", 00:16:54.268 "workload": "verify", 00:16:54.268 "status": "finished", 00:16:54.268 "verify_range": { 00:16:54.268 "start": 0, 00:16:54.268 "length": 8192 00:16:54.268 }, 00:16:54.268 "queue_depth": 128, 00:16:54.268 "io_size": 4096, 00:16:54.268 "runtime": 1.026837, 00:16:54.268 "iops": 4362.912516786988, 00:16:54.268 "mibps": 17.04262701869917, 00:16:54.268 "io_failed": 0, 00:16:54.268 "io_timeout": 0, 00:16:54.268 "avg_latency_us": 29041.89007792208, 00:16:54.268 "min_latency_us": 9770.821818181817, 00:16:54.268 "max_latency_us": 21448.145454545454 00:16:54.268 } 00:16:54.268 ], 00:16:54.268 "core_count": 1 00:16:54.268 } 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:54.268 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:54.268 nvmf_trace.0 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84832 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84832 ']' 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84832 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84832 00:16:54.527 killing process with pid 84832 00:16:54.527 Received shutdown signal, test time was about 1.000000 seconds 00:16:54.527 00:16:54.527 Latency(us) 00:16:54.527 [2024-11-20T09:10:33.446Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.527 [2024-11-20T09:10:33.446Z] =================================================================================================================== 00:16:54.527 
[2024-11-20T09:10:33.446Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84832' 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84832 00:16:54.527 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84832 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:54.787 rmmod nvme_tcp 00:16:54.787 rmmod nvme_fabrics 00:16:54.787 rmmod nvme_keyring 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 84788 ']' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 84788 00:16:54.787 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84788 ']' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84788 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84788 00:16:54.787 killing process with pid 84788 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84788' 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84788 00:16:54.787 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84788 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:55.046 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:16:55.046 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # continue 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Iqli9VGbfh /tmp/tmp.Zom9XS1URw 
/tmp/tmp.UyYF3kYxcS 00:16:55.046 ************************************ 00:16:55.046 END TEST nvmf_tls 00:16:55.046 ************************************ 00:16:55.046 00:16:55.046 real 1m26.872s 00:16:55.046 user 2m20.673s 00:16:55.046 sys 0m28.361s 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:55.046 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:55.306 ************************************ 00:16:55.306 START TEST nvmf_fips 00:16:55.306 ************************************ 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:55.306 * Looking for test storage... 
00:16:55.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.306 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.306 --rc genhtml_branch_coverage=1 00:16:55.306 --rc genhtml_function_coverage=1 00:16:55.306 --rc genhtml_legend=1 00:16:55.306 --rc geninfo_all_blocks=1 00:16:55.306 --rc 
geninfo_unexecuted_blocks=1 00:16:55.306 00:16:55.306 ' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.306 --rc genhtml_branch_coverage=1 00:16:55.306 --rc genhtml_function_coverage=1 00:16:55.306 --rc genhtml_legend=1 00:16:55.306 --rc geninfo_all_blocks=1 00:16:55.306 --rc geninfo_unexecuted_blocks=1 00:16:55.306 00:16:55.306 ' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.306 --rc genhtml_branch_coverage=1 00:16:55.306 --rc genhtml_function_coverage=1 00:16:55.306 --rc genhtml_legend=1 00:16:55.306 --rc geninfo_all_blocks=1 00:16:55.306 --rc geninfo_unexecuted_blocks=1 00:16:55.306 00:16:55.306 ' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:55.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:55.306 --rc genhtml_branch_coverage=1 00:16:55.306 --rc genhtml_function_coverage=1 00:16:55.306 --rc genhtml_legend=1 00:16:55.306 --rc geninfo_all_blocks=1 00:16:55.306 --rc geninfo_unexecuted_blocks=1 00:16:55.306 00:16:55.306 ' 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:55.306 09:10:34 
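The long `cmp_versions` trace above compares dotted version strings one numeric field at a time (`lt 1.15 2` here, `ge 3.1.1 3.0.0` further down for OpenSSL). A compact standalone sketch of the same component-wise comparison; `ver_lt` is a hypothetical helper, not the SPDK function itself:

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 is strictly lower than version $2,
# comparing dot-separated components numerically, left to right.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2     && echo "1.15 < 2"
ver_lt 3.1.1 3.0.0 || echo "3.1.1 >= 3.0.0"
```

Comparing components as integers rather than strings is the whole point: a lexical comparison would wrongly rank `1.15` above `1.9`.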
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:55.306 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
paths/export.sh@5 -- # export PATH 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:55.566 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:55.566 09:10:34 
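The `[: : integer expression expected` message from common.sh line 31 comes from evaluating `'[' '' -eq 1 ']'`, i.e. an integer test against an empty operand. A defensive sketch of the usual workaround, supplying a numeric default via parameter expansion; this is an illustration of the failure mode, not SPDK's actual fix:

```shell
#!/usr/bin/env bash
flag=""   # unset/empty, as in the failing trace

# '[ "" -eq 1 ]' aborts with "integer expression expected" because
# -eq requires both operands to be integers. Defaulting to 0 keeps
# the operand numeric whether or not the variable was ever set:
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The trace continues normally after the message because `[` merely returns a nonzero status, which the `if` treats as false.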
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:55.566 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:55.566 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:55.567 Error setting digest 00:16:55.567 4042412F6E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:55.567 4042412F6E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:16:55.567 09:10:34 
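The "Error setting digest" output above is the expected result: the test asserts `openssl md5` fails, because MD5 is not an approved algorithm once the FIPS provider is active. A sketch of the same probe; on a non-FIPS machine MD5 succeeds, so the output depends on the host:

```shell
#!/usr/bin/env bash
# Under FIPS-enabled OpenSSL, fetching the MD5 digest fails with
# "unsupported"; a successful digest means FIPS restrictions are off.
if echo test | openssl md5 >/dev/null 2>&1; then
    echo "fips=inactive"
else
    echo "fips=active"
fi
```

This is why the test wraps the call in its `NOT` helper: a zero exit status from `openssl md5` would mean FIPS mode was never actually engaged.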
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@280 -- # nvmf_veth_init 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@223 -- # create_target_ns 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # create_main_bridge 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@105 -- # delete_main_bridge 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:16:55.567 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator0 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:16:55.567 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target0 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.568 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0 up 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target0_br 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target0 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:16:55.568 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:55.827 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:16:55.827 10.0.0.1 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:55.827 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:16:55.828 10.0.0.2 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator0 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target0_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:16:55.828 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up initiator1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@151 -- # set_up target1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:16:55.828 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1 up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # set_up target1_br 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns target1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772163 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:16:55.828 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:16:55.828 10.0.0.3 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772164 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/target1/ifalias 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:16:55.828 10.0.0.4 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up initiator1 00:16:55.828 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # 
set_up initiator1_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@129 -- # set_up target1_br 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:16:55.829 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 2 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator0/ifalias' 00:16:55.829 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:56.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:56.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:16:56.089 00:16:56.089 --- 10.0.0.1 ping statistics --- 00:16:56.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.089 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:56.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:56.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.026 ms 00:16:56.089 00:16:56.089 --- 10.0.0.2 ping statistics --- 00:16:56.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.089 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # 
[[ -n initiator1 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:16:56.089 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:16:56.089 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:16:56.089 00:16:56.089 --- 10.0.0.3 ping statistics --- 00:16:56.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.089 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 
00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:16:56.089 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:56.089 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.135 ms 00:16:56.089 00:16:56.089 --- 10.0.0.4 ping statistics --- 00:16:56.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:56.089 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # return 0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:56.089 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' 
]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=initiator1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target0 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:56.090 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo target1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=target1 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:56.090 09:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=85173 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 85173 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85173 ']' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.090 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:56.090 [2024-11-20 09:10:35.000135] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:56.090 [2024-11-20 09:10:35.000439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.349 [2024-11-20 09:10:35.154951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.349 [2024-11-20 09:10:35.213870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.349 [2024-11-20 09:10:35.213931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.349 [2024-11-20 09:10:35.213945] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:56.349 [2024-11-20 09:10:35.213956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:56.349 [2024-11-20 09:10:35.213966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:56.349 [2024-11-20 09:10:35.214431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.yIe 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.yIe 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.yIe 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.yIe 00:16:57.284 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.543 [2024-11-20 09:10:36.370949] tcp.c: 738:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:16:57.543 [2024-11-20 09:10:36.386898] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:57.543 [2024-11-20 09:10:36.387130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.543 malloc0 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85227 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85227 /var/tmp/bdevperf.sock 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85227 ']' 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.543 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.802 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.802 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:57.802 [2024-11-20 09:10:36.544375] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:16:57.802 [2024-11-20 09:10:36.544467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85227 ] 00:16:57.802 [2024-11-20 09:10:36.698162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.061 [2024-11-20 09:10:36.763603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.628 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:58.628 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:58.628 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.yIe 00:16:58.886 09:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:59.145 [2024-11-20 09:10:37.976160] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.145 TLSTESTn1 00:16:59.404 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:59.404 Running I/O for 10 seconds... 
00:17:01.285 4126.00 IOPS, 16.12 MiB/s [2024-11-20T09:10:41.597Z] 4217.00 IOPS, 16.47 MiB/s [2024-11-20T09:10:42.532Z] 4265.00 IOPS, 16.66 MiB/s [2024-11-20T09:10:43.466Z] 4302.00 IOPS, 16.80 MiB/s [2024-11-20T09:10:44.401Z] 4322.00 IOPS, 16.88 MiB/s [2024-11-20T09:10:45.336Z] 4330.00 IOPS, 16.91 MiB/s [2024-11-20T09:10:46.271Z] 4336.43 IOPS, 16.94 MiB/s [2024-11-20T09:10:47.206Z] 4308.12 IOPS, 16.83 MiB/s [2024-11-20T09:10:48.583Z] 4294.44 IOPS, 16.78 MiB/s [2024-11-20T09:10:48.583Z] 4288.40 IOPS, 16.75 MiB/s 00:17:09.664 Latency(us) 00:17:09.664 [2024-11-20T09:10:48.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:09.664 Verification LBA range: start 0x0 length 0x2000 00:17:09.664 TLSTESTn1 : 10.02 4293.60 16.77 0.00 0.00 29757.43 5987.61 23950.43 00:17:09.664 [2024-11-20T09:10:48.583Z] =================================================================================================================== 00:17:09.664 [2024-11-20T09:10:48.583Z] Total : 4293.60 16.77 0.00 0.00 29757.43 5987.61 23950.43 00:17:09.664 { 00:17:09.664 "results": [ 00:17:09.664 { 00:17:09.664 "job": "TLSTESTn1", 00:17:09.664 "core_mask": "0x4", 00:17:09.664 "workload": "verify", 00:17:09.664 "status": "finished", 00:17:09.664 "verify_range": { 00:17:09.664 "start": 0, 00:17:09.664 "length": 8192 00:17:09.664 }, 00:17:09.664 "queue_depth": 128, 00:17:09.664 "io_size": 4096, 00:17:09.664 "runtime": 10.017226, 00:17:09.664 "iops": 4293.603838028612, 00:17:09.664 "mibps": 16.771889992299265, 00:17:09.664 "io_failed": 0, 00:17:09.664 "io_timeout": 0, 00:17:09.664 "avg_latency_us": 29757.426234406375, 00:17:09.664 "min_latency_us": 5987.607272727273, 00:17:09.664 "max_latency_us": 23950.429090909092 00:17:09.664 } 00:17:09.664 ], 00:17:09.664 "core_count": 1 00:17:09.664 } 00:17:09.664 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:09.664 
09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:09.664 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:09.664 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:09.665 nvmf_trace.0 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85227 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85227 ']' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85227 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85227 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85227' 00:17:09.665 killing process with pid 85227 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85227 00:17:09.665 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.665 00:17:09.665 Latency(us) 00:17:09.665 [2024-11-20T09:10:48.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.665 [2024-11-20T09:10:48.584Z] =================================================================================================================== 00:17:09.665 [2024-11-20T09:10:48.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85227 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:09.665 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:09.924 rmmod nvme_tcp 00:17:09.924 rmmod nvme_fabrics 00:17:09.924 rmmod nvme_keyring 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/common.sh@106 -- # set -e 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 85173 ']' 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 85173 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85173 ']' 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85173 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85173 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.924 killing process with pid 85173 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85173' 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85173 00:17:09.924 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85173 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:10.184 09:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:10.184 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete 
initiator0 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # continue 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:17:10.184 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:17:10.185 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:17:10.185 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:10.185 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:17:10.185 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.yIe 00:17:10.446 00:17:10.446 real 0m15.093s 00:17:10.446 user 0m20.849s 00:17:10.446 sys 0m5.912s 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:10.446 ************************************ 00:17:10.446 END TEST nvmf_fips 00:17:10.446 ************************************ 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.446 ************************************ 00:17:10.446 START TEST nvmf_control_msg_list 00:17:10.446 ************************************ 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:10.446 * Looking for test storage... 
00:17:10.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:10.446 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:10.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.446 --rc genhtml_branch_coverage=1 00:17:10.446 --rc genhtml_function_coverage=1 00:17:10.446 --rc genhtml_legend=1 00:17:10.446 --rc geninfo_all_blocks=1 00:17:10.446 --rc geninfo_unexecuted_blocks=1 00:17:10.446 00:17:10.446 ' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:10.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.446 --rc genhtml_branch_coverage=1 00:17:10.446 --rc genhtml_function_coverage=1 00:17:10.446 --rc genhtml_legend=1 00:17:10.446 --rc geninfo_all_blocks=1 00:17:10.446 --rc geninfo_unexecuted_blocks=1 00:17:10.446 00:17:10.446 ' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:10.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.446 --rc genhtml_branch_coverage=1 00:17:10.446 --rc genhtml_function_coverage=1 00:17:10.446 --rc genhtml_legend=1 00:17:10.446 --rc geninfo_all_blocks=1 00:17:10.446 --rc geninfo_unexecuted_blocks=1 00:17:10.446 00:17:10.446 ' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:10.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.446 --rc genhtml_branch_coverage=1 00:17:10.446 --rc genhtml_function_coverage=1 00:17:10.446 --rc genhtml_legend=1 00:17:10.446 --rc geninfo_all_blocks=1 00:17:10.446 --rc geninfo_unexecuted_blocks=1 00:17:10.446 00:17:10.446 ' 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@7 -- # uname -s 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.446 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.447 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:10.709 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:10.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 00:17:10.709 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@223 -- # create_target_ns 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.709 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br 
-o nvmf_br -j ACCEPT 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # 
_ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth 
== veth ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:10.710 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:10.710 10.0.0.1 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:10.710 10.0.0.2 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:10.710 
09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:10.710 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:10.711 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:10.711 
09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 
in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@151 -- # set_up target1 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:10.711 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772163 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:10.711 10.0.0.3 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772164 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
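The `set_ip` calls traced above take a numeric pool value (e.g. `167772163`) and turn it into a dotted-quad address before running `ip addr add`. A minimal sketch of that conversion, reconstructed from the `printf '%u.%u.%u.%u\n'` call in the trace (this is an illustrative re-implementation, not the actual `val_to_ip` from nvmf/setup.sh):

```shell
# Hypothetical re-implementation of the val_to_ip helper seen in the
# trace: split a 32-bit integer into four octets, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772163   # 167772163 == 0x0A000003 -> 10.0.0.3
```

This also explains the `ip_pool += 2` step later in the trace: each initiator/target pair consumes two consecutive pool values, so pair 0 gets 10.0.0.1/10.0.0.2 and pair 1 gets 10.0.0.3/10.0.0.4.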
-- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:10.711 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:10.711 10.0.0.4 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 
00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:10.972 
09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:10.972 09:10:49 
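The `ipts` wrapper invoked above (nvmf/common.sh@547) re-issues its iptables arguments with a `SPDK_NVMF:`-prefixed comment, so teardown can later locate and delete exactly the rules this test added. A dry-run sketch of that tagging scheme, using `echo` in place of a real (root-requiring) `iptables` call; the function body here is an assumption inferred from the traced command line:

```shell
# Hypothetical sketch of the ipts wrapper: append a comment match that
# records the original rule arguments under a SPDK_NVMF: tag. "echo"
# stands in for iptables so this runs without root.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
```

Because the comment embeds the full argument string, cleanup can list rules, filter on the `SPDK_NVMF:` prefix, and replay each match with `-D` to remove only test-owned rules.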
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:10.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:17:10.972 00:17:10.972 --- 10.0.0.1 ping statistics --- 00:17:10.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.972 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@159 -- # dev=target0 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:10.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:10.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:17:10.972 00:17:10.972 --- 10.0.0.2 ping statistics --- 00:17:10.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.972 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:10.972 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:10.973 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:10.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:10.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:10.973 00:17:10.973 --- 10.0.0.3 ping statistics --- 00:17:10.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.973 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:10.973 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:10.973 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:10.973 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.149 ms 00:17:10.973 00:17:10.973 --- 10.0.0.4 ping statistics --- 00:17:10.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.973 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # return 0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:10.973 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:10.973 09:10:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target0 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:10.973 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo target1 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=target1 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set 
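Throughout the trace, `get_ip_address` recovers each device's address by reading `/sys/class/net/<dev>/ifalias`, which `set_ip` populated earlier with `tee`. A minimal sketch of that round-trip convention, using a temporary directory as a stand-in for the real sysfs tree (the directory layout and helper here are illustrative assumptions):

```shell
# Hypothetical sketch of the ifalias convention: set_ip stores the IP
# in the device's ifalias; get_ip_address reads it back. A fake sysfs
# tree under mktemp stands in for /sys/class/net.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/initiator0"

# What set_ip's "echo <ip> | tee .../ifalias" accomplishes:
echo 10.0.0.1 > "$fake_sys/initiator0/ifalias"

# What get_ip_address's "cat .../ifalias" accomplishes:
get_ip_address() { cat "$fake_sys/$1/ifalias"; }

get_ip_address initiator0   # 10.0.0.1
```

Storing the address in ifalias lets later helpers (here, `nvmf_legacy_env` setting `NVMF_FIRST_INITIATOR_IP` and friends) query it uniformly, whether the device lives in the root namespace or inside `nvmf_ns_spdk` via `ip netns exec`.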
+x 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=85646 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 85646 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85646 ']' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.974 09:10:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.232 [2024-11-20 09:10:49.911406] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:11.232 [2024-11-20 09:10:49.911498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.232 [2024-11-20 09:10:50.060001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.232 [2024-11-20 09:10:50.116927] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:11.232 [2024-11-20 09:10:50.116996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.232 [2024-11-20 09:10:50.117023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.232 [2024-11-20 09:10:50.117033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.232 [2024-11-20 09:10:50.117043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.232 [2024-11-20 09:10:50.117509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:11.491 09:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 [2024-11-20 09:10:50.300716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 Malloc0 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:11.491 [2024-11-20 09:10:50.348172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85677 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85678 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85679 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85677 00:17:11.491 09:10:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' 00:17:11.750 [2024-11-20 09:10:50.532517] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.750 [2024-11-20 09:10:50.543073] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:11.750 [2024-11-20 09:10:50.543554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:12.688 Initializing NVMe Controllers 00:17:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:12.688 Initialization complete. Launching workers. 00:17:12.688 ======================================================== 00:17:12.688 Latency(us) 00:17:12.688 Device Information : IOPS MiB/s Average min max 00:17:12.688 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3404.98 13.30 293.38 119.28 680.42 00:17:12.688 ======================================================== 00:17:12.688 Total : 3404.98 13.30 293.38 119.28 680.42 00:17:12.688 00:17:12.688 Initializing NVMe Controllers 00:17:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:12.688 Initialization complete. Launching workers. 
00:17:12.688 ======================================================== 00:17:12.688 Latency(us) 00:17:12.688 Device Information : IOPS MiB/s Average min max 00:17:12.688 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3347.00 13.07 298.40 157.06 680.38 00:17:12.688 ======================================================== 00:17:12.688 Total : 3347.00 13.07 298.40 157.06 680.38 00:17:12.688 00:17:12.688 Initializing NVMe Controllers 00:17:12.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:12.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:12.688 Initialization complete. Launching workers. 00:17:12.688 ======================================================== 00:17:12.688 Latency(us) 00:17:12.688 Device Information : IOPS MiB/s Average min max 00:17:12.688 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3327.31 13.00 300.19 196.70 760.20 00:17:12.688 ======================================================== 00:17:12.688 Total : 3327.31 13.00 300.19 196.70 760.20 00:17:12.688 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85678 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85679 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:12.688 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:12.947 09:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:12.947 rmmod nvme_tcp 00:17:12.947 rmmod nvme_fabrics 00:17:12.947 rmmod nvme_keyring 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 85646 ']' 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 85646 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85646 ']' 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85646 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85646 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:12.947 killing process with pid 85646 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 85646' 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85646 00:17:12.947 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 85646 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:13.207 09:10:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip 
link delete nvmf_br 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:13.207 
09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # continue 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-save 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:17:13.207 00:17:13.207 real 0m2.960s 00:17:13.207 user 0m4.751s 00:17:13.207 sys 0m1.438s 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.207 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 ************************************ 00:17:13.207 END TEST nvmf_control_msg_list 00:17:13.207 ************************************ 00:17:13.467 09:10:52 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.467 ************************************ 00:17:13.467 START TEST nvmf_wait_for_buf 00:17:13.467 ************************************ 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:13.467 * Looking for test storage... 00:17:13.467 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.467 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.467 --rc genhtml_branch_coverage=1 00:17:13.467 --rc genhtml_function_coverage=1 00:17:13.467 --rc genhtml_legend=1 00:17:13.467 --rc geninfo_all_blocks=1 
00:17:13.467 --rc geninfo_unexecuted_blocks=1 00:17:13.467 00:17:13.467 ' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.467 --rc genhtml_branch_coverage=1 00:17:13.467 --rc genhtml_function_coverage=1 00:17:13.467 --rc genhtml_legend=1 00:17:13.467 --rc geninfo_all_blocks=1 00:17:13.467 --rc geninfo_unexecuted_blocks=1 00:17:13.467 00:17:13.467 ' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.467 --rc genhtml_branch_coverage=1 00:17:13.467 --rc genhtml_function_coverage=1 00:17:13.467 --rc genhtml_legend=1 00:17:13.467 --rc geninfo_all_blocks=1 00:17:13.467 --rc geninfo_unexecuted_blocks=1 00:17:13.467 00:17:13.467 ' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.467 --rc genhtml_branch_coverage=1 00:17:13.467 --rc genhtml_function_coverage=1 00:17:13.467 --rc genhtml_legend=1 00:17:13.467 --rc geninfo_all_blocks=1 00:17:13.467 --rc geninfo_unexecuted_blocks=1 00:17:13.467 00:17:13.467 ' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.467 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.467 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@50 
-- # : 0 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:13.468 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:13.468 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@223 -- # create_target_ns 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:17:13.468 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.728 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target0 type 
veth peer name target0_br 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target0 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:13.728 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 
-- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:13.729 10.0.0.1 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 
-- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:13.729 10.0.0.2 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up 
target0_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:13.729 
09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1_br up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@151 -- # set_up target1 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:13.729 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # 
add_to_ns target1 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772163 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:13.730 10.0.0.3 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 
-- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772164 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:13.730 10.0.0.4 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 
up' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:13.730 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set 
initiator1_br up 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:13.991 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:13.991 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:13.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:13.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:17:13.991 00:17:13.991 --- 10.0.0.1 ping statistics --- 00:17:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.991 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:13.991 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:13.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.992 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:17:13.992 00:17:13.992 --- 10.0.0.2 ping statistics --- 00:17:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.992 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:13.992 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:13.992 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:17:13.992 00:17:13.992 --- 10.0.0.3 ping statistics --- 00:17:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.992 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 
00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:13.992 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.992 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:17:13.992 00:17:13.992 --- 10.0.0.4 ping statistics --- 00:17:13.992 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.992 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # return 0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator0 00:17:13.992 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 
-- # get_initiator_ip_address initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # 
get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target0 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target0 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.993 
09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo target1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=target1 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:13.993 09:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=85914 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 85914 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@835 -- # '[' -z 85914 ']' 00:17:13.993 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.994 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.994 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.994 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.994 09:10:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.252 [2024-11-20 09:10:52.929210] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:14.252 [2024-11-20 09:10:52.930161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.252 [2024-11-20 09:10:53.075484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.252 [2024-11-20 09:10:53.122431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.252 [2024-11-20 09:10:53.122484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.252 [2024-11-20 09:10:53.122495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.252 [2024-11-20 09:10:53.122508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:14.252 [2024-11-20 09:10:53.122515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.252 [2024-11-20 09:10:53.122911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:14.512 
09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 Malloc0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 [2024-11-20 09:10:53.340357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:14.512 [2024-11-20 09:10:53.364472] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.512 09:10:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:14.771 [2024-11-20 09:10:53.569088] 
subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:16.150 Initializing NVMe Controllers 00:17:16.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:16.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:16.150 Initialization complete. Launching workers. 00:17:16.150 ======================================================== 00:17:16.150 Latency(us) 00:17:16.150 Device Information : IOPS MiB/s Average min max 00:17:16.150 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32248.56 8043.01 64101.03 00:17:16.150 ======================================================== 00:17:16.150 Total : 129.00 16.12 32248.56 8043.01 64101.03 00:17:16.150 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 
00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:16.150 09:10:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:17:16.150 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:16.150 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:17:16.150 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:16.150 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:16.150 rmmod nvme_tcp 00:17:16.150 rmmod nvme_fabrics 00:17:16.150 rmmod nvme_keyring 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 85914 ']' 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 85914 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85914 ']' 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85914 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:16.409 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 85914 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.410 killing process with pid 85914 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85914' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85914 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85914 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local 
dev=nvmf_br in_ns= 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:16.410 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@115 
-- # [[ -n '' ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # continue 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:17:16.669 00:17:16.669 real 0m3.297s 00:17:16.669 user 0m2.739s 00:17:16.669 sys 0m0.812s 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.669 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.669 ************************************ 00:17:16.669 END TEST nvmf_wait_for_buf 00:17:16.669 ************************************ 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.669 ************************************ 00:17:16.669 START TEST nvmf_nsid 00:17:16.669 ************************************ 00:17:16.669 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:16.930 * Looking for test storage... 
00:17:16.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.930 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.930 --rc genhtml_branch_coverage=1 00:17:16.930 --rc genhtml_function_coverage=1 00:17:16.930 --rc genhtml_legend=1 00:17:16.930 --rc geninfo_all_blocks=1 00:17:16.930 --rc 
geninfo_unexecuted_blocks=1 00:17:16.930 00:17:16.930 ' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.930 --rc genhtml_branch_coverage=1 00:17:16.930 --rc genhtml_function_coverage=1 00:17:16.930 --rc genhtml_legend=1 00:17:16.930 --rc geninfo_all_blocks=1 00:17:16.930 --rc geninfo_unexecuted_blocks=1 00:17:16.930 00:17:16.930 ' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.930 --rc genhtml_branch_coverage=1 00:17:16.930 --rc genhtml_function_coverage=1 00:17:16.930 --rc genhtml_legend=1 00:17:16.930 --rc geninfo_all_blocks=1 00:17:16.930 --rc geninfo_unexecuted_blocks=1 00:17:16.930 00:17:16.930 ' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:16.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.930 --rc genhtml_branch_coverage=1 00:17:16.930 --rc genhtml_function_coverage=1 00:17:16.930 --rc genhtml_legend=1 00:17:16.930 --rc geninfo_all_blocks=1 00:17:16.930 --rc geninfo_unexecuted_blocks=1 00:17:16.930 00:17:16.930 ' 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.930 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.931 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
paths/export.sh@5 -- # export PATH 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:16.931 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:16.931 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@223 -- # create_target_ns 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- 
# ip netns exec nvmf_ns_spdk ip link set lo up 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # return 0 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 
-- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:16.931 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:16.932 
09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target0 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 
00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:16.932 10.0.0.1 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:16.932 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:17.192 10.0.0.2 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:17.192 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:17.192 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@151 -- # set_up target1 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:17.192 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.193 09:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772163 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:17.193 10.0.0.3 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772164 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:17.193 09:10:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:17.193 10.0.0.4 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:17.193 
09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:17.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:17:17.193 00:17:17.193 --- 10.0.0.1 ping statistics --- 00:17:17.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.193 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n 
target0 ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:17.193 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:17.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:17.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:17:17.194 00:17:17.194 --- 10.0.0.2 ping statistics --- 00:17:17.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.194 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:17.194 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:17.454 
09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:17.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:17.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:17:17.454 00:17:17.454 --- 10.0.0.3 ping statistics --- 00:17:17.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.454 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:17:17.454 
09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:17.454 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:17.454 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:17:17.454 00:17:17.454 --- 10.0.0.4 ping statistics --- 00:17:17.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.454 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # return 0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:17.454 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target0 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target0 00:17:17.455 09:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo target1 00:17:17.455 09:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=target1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.455 
09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=86193 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 86193 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86193 ']' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.455 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.455 [2024-11-20 09:10:56.294149] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:17.455 [2024-11-20 09:10:56.294242] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.714 [2024-11-20 09:10:56.430537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.714 [2024-11-20 09:10:56.471803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:17.714 [2024-11-20 09:10:56.471873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.714 [2024-11-20 09:10:56.471884] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.714 [2024-11-20 09:10:56.471892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.714 [2024-11-20 09:10:56.471899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:17.714 [2024-11-20 09:10:56.472270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.714 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.714 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:17.714 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:17.714 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.714 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=86218 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:17.974 09:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo initiator0 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=11ab7b9e-0465-490b-9778-4e295174f117 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- 
# uuidgen 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=15b86347-2b54-4846-a4c6-52321813ace4 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=2e979129-d5fe-4f93-abea-07a87dc6b763 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.974 null0 00:17:17.974 null1 00:17:17.974 null2 00:17:17.974 [2024-11-20 09:10:56.722666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.974 [2024-11-20 09:10:56.729313] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:17.974 [2024-11-20 09:10:56.729413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86218 ] 00:17:17.974 [2024-11-20 09:10:56.746803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 86218 /var/tmp/tgt2.sock 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 86218 ']' 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:17.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.974 09:10:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:17.974 [2024-11-20 09:10:56.885298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.233 [2024-11-20 09:10:56.969775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.491 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.491 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:18.491 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:19.060 [2024-11-20 09:10:57.739203] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.060 [2024-11-20 09:10:57.755313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:19.060 nvme0n1 nvme0n2 00:17:19.060 nvme1n1 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 
00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:19.060 09:10:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@96 -- # uuid2nguid 11ab7b9e-0465-490b-9778-4e295174f117 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:20.438 09:10:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=11ab7b9e0465490b97784e295174f117 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 11AB7B9E0465490B97784E295174F117 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 11AB7B9E0465490B97784E295174F117 == \1\1\A\B\7\B\9\E\0\4\6\5\4\9\0\B\9\7\7\8\4\E\2\9\5\1\7\4\F\1\1\7 ]] 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 15b86347-2b54-4846-a4c6-52321813ace4 00:17:20.438 09:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=15b863472b544846a4c652321813ace4 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 15B863472B544846A4C652321813ACE4 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 15B863472B544846A4C652321813ACE4 == \1\5\B\8\6\3\4\7\2\B\5\4\4\8\4\6\A\4\C\6\5\2\3\2\1\8\1\3\A\C\E\4 ]] 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 2e979129-d5fe-4f93-abea-07a87dc6b763 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:17:20.438 09:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2e979129d5fe4f93abea07a87dc6b763 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2E979129D5FE4F93ABEA07A87DC6B763 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 2E979129D5FE4F93ABEA07A87DC6B763 == \2\E\9\7\9\1\2\9\D\5\F\E\4\F\9\3\A\B\E\A\0\7\A\8\7\D\C\6\B\7\6\3 ]] 00:17:20.438 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 86218 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86218 ']' 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 86218 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86218 00:17:20.698 killing process with pid 86218 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86218' 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86218 00:17:20.698 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86218 00:17:21.266 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:21.266 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:21.266 09:10:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:21.266 rmmod nvme_tcp 00:17:21.266 rmmod nvme_fabrics 00:17:21.266 rmmod nvme_keyring 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 86193 ']' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 86193 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 86193 ']' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 
-- # kill -0 86193 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86193 00:17:21.266 killing process with pid 86193 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86193' 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 86193 00:17:21.266 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 86193 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:21.525 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@114 -- # local 
dev=initiator1 in_ns= 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # continue 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:17:21.784 ************************************ 00:17:21.784 END TEST nvmf_nsid 00:17:21.784 ************************************ 00:17:21.784 00:17:21.784 real 0m4.977s 00:17:21.784 user 0m7.776s 00:17:21.784 sys 0m1.533s 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:21.784 ************************************ 00:17:21.784 END TEST nvmf_target_extra 00:17:21.784 ************************************ 00:17:21.784 00:17:21.784 real 7m31.725s 00:17:21.784 user 18m13.717s 00:17:21.784 sys 1m30.495s 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:21.784 09:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:21.784 09:11:00 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:21.784 09:11:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:21.784 09:11:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:21.784 09:11:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.784 ************************************ 00:17:21.784 START TEST nvmf_host 00:17:21.784 ************************************ 00:17:21.784 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:21.784 * Looking for test storage... 
00:17:21.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:21.784 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:21.784 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:21.784 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.045 --rc genhtml_branch_coverage=1 00:17:22.045 --rc genhtml_function_coverage=1 00:17:22.045 --rc genhtml_legend=1 00:17:22.045 --rc geninfo_all_blocks=1 00:17:22.045 --rc geninfo_unexecuted_blocks=1 00:17:22.045 00:17:22.045 ' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.045 --rc genhtml_branch_coverage=1 00:17:22.045 --rc genhtml_function_coverage=1 00:17:22.045 --rc genhtml_legend=1 00:17:22.045 --rc 
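The xtrace above steps through SPDK's `cmp_versions` helper deciding whether the installed `lcov` (1.15) is older than 2, which selects the legacy `--rc lcov_*` option spelling. A minimal standalone sketch of that comparison logic (a hypothetical re-implementation for illustration, not SPDK's actual `scripts/common.sh`):

```shell
# Sketch of a "less-than" version comparison like the one traced above.
# Splits each version on '.', '-' or ':' and compares component-wise.
lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2  || echo "2.1 >= 2"
```

This mirrors why the trace takes the `return 0` path for `lt 1.15 2`: the first components already decide the comparison (1 < 2).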
geninfo_all_blocks=1 00:17:22.045 --rc geninfo_unexecuted_blocks=1 00:17:22.045 00:17:22.045 ' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.045 --rc genhtml_branch_coverage=1 00:17:22.045 --rc genhtml_function_coverage=1 00:17:22.045 --rc genhtml_legend=1 00:17:22.045 --rc geninfo_all_blocks=1 00:17:22.045 --rc geninfo_unexecuted_blocks=1 00:17:22.045 00:17:22.045 ' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.045 --rc genhtml_branch_coverage=1 00:17:22.045 --rc genhtml_function_coverage=1 00:17:22.045 --rc genhtml_legend=1 00:17:22.045 --rc geninfo_all_blocks=1 00:17:22.045 --rc geninfo_unexecuted_blocks=1 00:17:22.045 00:17:22.045 ' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:22.045 09:11:00 
nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.045 09:11:00 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:22.046 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:22.046 ************************************ 00:17:22.046 START TEST nvmf_aer 00:17:22.046 ************************************ 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 
00:17:22.046 * Looking for test storage... 00:17:22.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:17:22.046 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.307 --rc genhtml_branch_coverage=1 00:17:22.307 --rc genhtml_function_coverage=1 00:17:22.307 --rc genhtml_legend=1 00:17:22.307 --rc geninfo_all_blocks=1 00:17:22.307 --rc geninfo_unexecuted_blocks=1 00:17:22.307 00:17:22.307 ' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.307 --rc 
genhtml_branch_coverage=1 00:17:22.307 --rc genhtml_function_coverage=1 00:17:22.307 --rc genhtml_legend=1 00:17:22.307 --rc geninfo_all_blocks=1 00:17:22.307 --rc geninfo_unexecuted_blocks=1 00:17:22.307 00:17:22.307 ' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.307 --rc genhtml_branch_coverage=1 00:17:22.307 --rc genhtml_function_coverage=1 00:17:22.307 --rc genhtml_legend=1 00:17:22.307 --rc geninfo_all_blocks=1 00:17:22.307 --rc geninfo_unexecuted_blocks=1 00:17:22.307 00:17:22.307 ' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:22.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.307 --rc genhtml_branch_coverage=1 00:17:22.307 --rc genhtml_function_coverage=1 00:17:22.307 --rc genhtml_legend=1 00:17:22.307 --rc geninfo_all_blocks=1 00:17:22.307 --rc geninfo_unexecuted_blocks=1 00:17:22.307 00:17:22.307 ' 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:22.307 09:11:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.307 09:11:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:22.307 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:22.308 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' -n '' ']' 
00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@223 -- # create_target_ns 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:22.308 
09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 
00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 
00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up target0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 
00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 
10.0.0.1/24 dev initiator0' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:22.308 10.0.0.1 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:22.308 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target0/ifalias 00:17:22.309 10.0.0.2 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.309 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:22.309 
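The `val_to_ip` helper traced above (setup.sh@11–13) converts a 32-bit integer from the IP pool into dotted-quad notation via `printf '%u.%u.%u.%u'` on its four bytes, e.g. 167772161 → 10.0.0.1. A minimal Python sketch of the same conversion — the name `val_to_ip` mirrors the shell helper, but this implementation is an illustration, not code from the test suite:

```python
def val_to_ip(val: int) -> str:
    """Convert a 32-bit integer to dotted-quad IPv4 notation,
    equivalent to setup.sh's printf '%u.%u.%u.%u' on the four bytes."""
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

# 167772161 == 0x0A000001, the first address in the pool
print(val_to_ip(167772161))  # 10.0.0.1
print(val_to_ip(167772162))  # 10.0.0.2
```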
09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:22.309 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:22.581 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@151 -- # set_up target1 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 
00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772163 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:22.581 10.0.0.3 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:22.581 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772164 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:22.581 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:22.582 10.0.0.4 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set target1 up' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip 
link set target1_br up 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:22.582 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:22.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:22.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:22.582 00:17:22.582 --- 10.0.0.1 ping statistics --- 00:17:22.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.582 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target0 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:22.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:22.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:22.582 00:17:22.582 --- 10.0.0.2 ping statistics --- 00:17:22.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.582 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' 
cat /sys/class/net/initiator1/ifalias' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:22.582 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:22.582 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:22.583 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:17:22.583 00:17:22.583 --- 10.0.0.3 ping statistics --- 00:17:22.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.583 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:22.583 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:22.583 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:17:22.583 00:17:22.583 --- 10.0.0.4 ping statistics --- 00:17:22.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.583 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # return 0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:22.583 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:22.583 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.583 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target0 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # 
dev=target0 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo target1 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=target1 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=86605 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 86605 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86605 ']' 00:17:22.853 09:11:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.853 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:22.853 [2024-11-20 09:11:01.618558] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:22.853 [2024-11-20 09:11:01.618703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:23.112 [2024-11-20 09:11:01.777027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:23.112 [2024-11-20 09:11:01.842297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:23.112 [2024-11-20 09:11:01.842373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:23.112 [2024-11-20 09:11:01.842388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:23.112 [2024-11-20 09:11:01.842398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:23.112 [2024-11-20 09:11:01.842408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:23.112 [2024-11-20 09:11:01.843830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.112 [2024-11-20 09:11:01.843898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:23.112 [2024-11-20 09:11:01.844041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:23.112 [2024-11-20 09:11:01.844047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.112 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.112 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:23.112 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:23.112 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:23.112 09:11:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.112 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.112 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:23.112 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.112 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 [2024-11-20 09:11:02.035484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.372 09:11:02 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 Malloc0 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 [2024-11-20 09:11:02.100792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.372 [ 
00:17:23.372 { 00:17:23.372 "allow_any_host": true, 00:17:23.372 "hosts": [], 00:17:23.372 "listen_addresses": [], 00:17:23.372 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:23.372 "subtype": "Discovery" 00:17:23.372 }, 00:17:23.372 { 00:17:23.372 "allow_any_host": true, 00:17:23.372 "hosts": [], 00:17:23.372 "listen_addresses": [ 00:17:23.372 { 00:17:23.372 "adrfam": "IPv4", 00:17:23.372 "traddr": "10.0.0.2", 00:17:23.372 "trsvcid": "4420", 00:17:23.372 "trtype": "TCP" 00:17:23.372 } 00:17:23.372 ], 00:17:23.372 "max_cntlid": 65519, 00:17:23.372 "max_namespaces": 2, 00:17:23.372 "min_cntlid": 1, 00:17:23.372 "model_number": "SPDK bdev Controller", 00:17:23.372 "namespaces": [ 00:17:23.372 { 00:17:23.372 "bdev_name": "Malloc0", 00:17:23.372 "name": "Malloc0", 00:17:23.372 "nguid": "7D4F5A79D41A4090A7A59930FB2CAA05", 00:17:23.372 "nsid": 1, 00:17:23.372 "uuid": "7d4f5a79-d41a-4090-a7a5-9930fb2caa05" 00:17:23.372 } 00:17:23.372 ], 00:17:23.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.372 "serial_number": "SPDK00000000000001", 00:17:23.372 "subtype": "NVMe" 00:17:23.372 } 00:17:23.372 ] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86647 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:23.372 09:11:02 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:23.372 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 Malloc1 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 [ 00:17:23.632 { 00:17:23.632 "allow_any_host": true, 00:17:23.632 "hosts": [], 00:17:23.632 "listen_addresses": [], 00:17:23.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:23.632 "subtype": "Discovery" 00:17:23.632 }, 00:17:23.632 { 00:17:23.632 "allow_any_host": true, 00:17:23.632 "hosts": [], 00:17:23.632 "listen_addresses": [ 00:17:23.632 { 00:17:23.632 "adrfam": "IPv4", 00:17:23.632 "traddr": "10.0.0.2", 00:17:23.632 Asynchronous Event Request test 00:17:23.632 Attaching to 10.0.0.2 00:17:23.632 Attached to 10.0.0.2 00:17:23.632 Registering asynchronous event callbacks... 00:17:23.632 Starting namespace attribute notice tests for all controllers... 
00:17:23.632 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:23.632 aer_cb - Changed Namespace 00:17:23.632 Cleaning up... 00:17:23.632 "trsvcid": "4420", 00:17:23.632 "trtype": "TCP" 00:17:23.632 } 00:17:23.632 ], 00:17:23.632 "max_cntlid": 65519, 00:17:23.632 "max_namespaces": 2, 00:17:23.632 "min_cntlid": 1, 00:17:23.632 "model_number": "SPDK bdev Controller", 00:17:23.632 "namespaces": [ 00:17:23.632 { 00:17:23.632 "bdev_name": "Malloc0", 00:17:23.632 "name": "Malloc0", 00:17:23.632 "nguid": "7D4F5A79D41A4090A7A59930FB2CAA05", 00:17:23.632 "nsid": 1, 00:17:23.632 "uuid": "7d4f5a79-d41a-4090-a7a5-9930fb2caa05" 00:17:23.632 }, 00:17:23.632 { 00:17:23.632 "bdev_name": "Malloc1", 00:17:23.632 "name": "Malloc1", 00:17:23.632 "nguid": "2CE778BB4F4A488392787C0C77B970BE", 00:17:23.632 "nsid": 2, 00:17:23.632 "uuid": "2ce778bb-4f4a-4883-9278-7c0c77b970be" 00:17:23.632 } 00:17:23.632 ], 00:17:23.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:23.632 "serial_number": "SPDK00000000000001", 00:17:23.632 "subtype": "NVMe" 00:17:23.632 } 00:17:23.632 ] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86647 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 
09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:23.632 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:23.891 rmmod nvme_tcp 00:17:23.891 rmmod nvme_fabrics 00:17:23.891 rmmod nvme_keyring 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:17:23.891 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 86605 ']' 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 86605 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86605 ']' 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # 
kill -0 86605 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86605 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86605' 00:17:23.892 killing process with pid 86605 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86605 00:17:23.892 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86605 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:24.150 09:11:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:24.150 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:24.410 09:11:03 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # continue 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # continue 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:17:24.410 00:17:24.410 real 0m2.275s 00:17:24.410 user 0m4.404s 00:17:24.410 sys 0m0.863s 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.410 ************************************ 00:17:24.410 END TEST nvmf_aer 00:17:24.410 ************************************ 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:24.410 09:11:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:24.410 ************************************ 00:17:24.410 START TEST nvmf_async_init 00:17:24.410 ************************************ 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:24.410 * Looking for test storage... 00:17:24.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.410 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.670 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:24.670 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.671 
09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.671 --rc genhtml_branch_coverage=1 00:17:24.671 --rc genhtml_function_coverage=1 00:17:24.671 --rc genhtml_legend=1 00:17:24.671 --rc geninfo_all_blocks=1 00:17:24.671 --rc geninfo_unexecuted_blocks=1 00:17:24.671 00:17:24.671 ' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.671 --rc genhtml_branch_coverage=1 00:17:24.671 --rc genhtml_function_coverage=1 00:17:24.671 --rc genhtml_legend=1 00:17:24.671 --rc geninfo_all_blocks=1 00:17:24.671 --rc geninfo_unexecuted_blocks=1 00:17:24.671 00:17:24.671 ' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.671 --rc genhtml_branch_coverage=1 00:17:24.671 --rc genhtml_function_coverage=1 00:17:24.671 --rc genhtml_legend=1 00:17:24.671 --rc geninfo_all_blocks=1 00:17:24.671 --rc geninfo_unexecuted_blocks=1 00:17:24.671 00:17:24.671 ' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.671 --rc genhtml_branch_coverage=1 00:17:24.671 --rc genhtml_function_coverage=1 00:17:24.671 --rc genhtml_legend=1 00:17:24.671 --rc geninfo_all_blocks=1 00:17:24.671 --rc geninfo_unexecuted_blocks=1 00:17:24.671 00:17:24.671 ' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:24.671 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5338881777514dcb81e59dd446be95ef 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@223 -- # create_target_ns 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:24.671 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up target0 00:17:24.672 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf 
'%u.%u.%u.%u\n' 10 0 0 1 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:24.672 10.0.0.1 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:24.672 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:24.672 10.0.0.2 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=initiator0_br 
bridge=nvmf_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:24.672 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables 
-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:24.673 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@151 -- # set_up target1 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@207 -- # ip link set target1 up 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:24.673 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772163 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 
dev initiator1 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:24.934 10.0.0.3 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772164 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # 
echo 10.0.0.4 00:17:24.934 10.0.0.4 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:24.934 
09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:24.934 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 
00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:24.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:17:24.935 00:17:24.935 --- 10.0.0.1 ping statistics --- 00:17:24.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.935 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:24.935 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target0 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:24.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:24.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:17:24.935 00:17:24.935 --- 10.0.0.2 ping statistics --- 00:17:24.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.935 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:24.935 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:24.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:24.935 00:17:24.935 --- 10.0.0.3 ping statistics --- 00:17:24.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.935 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:17:24.935 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target1 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:24.935 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:24.936 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:24.936 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:17:24.936 00:17:24.936 --- 10.0.0.4 ping statistics --- 00:17:24.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.936 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # return 0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:24.936 09:11:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target0 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/target0/ifalias' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=target1 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:24.936 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:25.195 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=86871 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 86871 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86871 ']' 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.196 09:11:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.196 [2024-11-20 09:11:03.953196] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:25.196 [2024-11-20 09:11:03.953312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.196 [2024-11-20 09:11:04.104307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.454 [2024-11-20 09:11:04.159629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.454 [2024-11-20 09:11:04.159684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:25.454 [2024-11-20 09:11:04.159712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.454 [2024-11-20 09:11:04.159722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.454 [2024-11-20 09:11:04.159732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.454 [2024-11-20 09:11:04.160180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.454 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.455 [2024-11-20 09:11:04.352067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.455 null0 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.455 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5338881777514dcb81e59dd446be95ef 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.714 [2024-11-20 09:11:04.392215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.714 nvme0n1 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.714 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.973 [ 00:17:25.973 { 00:17:25.973 "aliases": [ 00:17:25.973 "53388817-7751-4dcb-81e5-9dd446be95ef" 00:17:25.973 ], 00:17:25.973 "assigned_rate_limits": { 00:17:25.973 "r_mbytes_per_sec": 0, 00:17:25.973 "rw_ios_per_sec": 0, 00:17:25.973 "rw_mbytes_per_sec": 0, 00:17:25.973 "w_mbytes_per_sec": 0 00:17:25.973 }, 00:17:25.973 "block_size": 512, 00:17:25.973 "claimed": false, 00:17:25.973 "driver_specific": { 00:17:25.973 "mp_policy": "active_passive", 00:17:25.973 "nvme": [ 00:17:25.973 { 00:17:25.973 "ctrlr_data": { 00:17:25.973 "ana_reporting": false, 00:17:25.973 "cntlid": 1, 00:17:25.973 "firmware_revision": "25.01", 00:17:25.973 "model_number": "SPDK bdev Controller", 00:17:25.973 "multi_ctrlr": true, 00:17:25.973 "oacs": { 00:17:25.974 "firmware": 0, 00:17:25.974 "format": 0, 00:17:25.974 "ns_manage": 0, 00:17:25.974 
"security": 0 00:17:25.974 }, 00:17:25.974 "serial_number": "00000000000000000000", 00:17:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.974 "vendor_id": "0x8086" 00:17:25.974 }, 00:17:25.974 "ns_data": { 00:17:25.974 "can_share": true, 00:17:25.974 "id": 1 00:17:25.974 }, 00:17:25.974 "trid": { 00:17:25.974 "adrfam": "IPv4", 00:17:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.974 "traddr": "10.0.0.2", 00:17:25.974 "trsvcid": "4420", 00:17:25.974 "trtype": "TCP" 00:17:25.974 }, 00:17:25.974 "vs": { 00:17:25.974 "nvme_version": "1.3" 00:17:25.974 } 00:17:25.974 } 00:17:25.974 ] 00:17:25.974 }, 00:17:25.974 "memory_domains": [ 00:17:25.974 { 00:17:25.974 "dma_device_id": "system", 00:17:25.974 "dma_device_type": 1 00:17:25.974 } 00:17:25.974 ], 00:17:25.974 "name": "nvme0n1", 00:17:25.974 "num_blocks": 2097152, 00:17:25.974 "numa_id": -1, 00:17:25.974 "product_name": "NVMe disk", 00:17:25.974 "supported_io_types": { 00:17:25.974 "abort": true, 00:17:25.974 "compare": true, 00:17:25.974 "compare_and_write": true, 00:17:25.974 "copy": true, 00:17:25.974 "flush": true, 00:17:25.974 "get_zone_info": false, 00:17:25.974 "nvme_admin": true, 00:17:25.974 "nvme_io": true, 00:17:25.974 "nvme_io_md": false, 00:17:25.974 "nvme_iov_md": false, 00:17:25.974 "read": true, 00:17:25.974 "reset": true, 00:17:25.974 "seek_data": false, 00:17:25.974 "seek_hole": false, 00:17:25.974 "unmap": false, 00:17:25.974 "write": true, 00:17:25.974 "write_zeroes": true, 00:17:25.974 "zcopy": false, 00:17:25.974 "zone_append": false, 00:17:25.974 "zone_management": false 00:17:25.974 }, 00:17:25.974 "uuid": "53388817-7751-4dcb-81e5-9dd446be95ef", 00:17:25.974 "zoned": false 00:17:25.974 } 00:17:25.974 ] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:25.974 09:11:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 [2024-11-20 09:11:04.662048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:25.974 [2024-11-20 09:11:04.662145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf91080 (9): Bad file descriptor 00:17:25.974 [2024-11-20 09:11:04.793950] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 [ 00:17:25.974 { 00:17:25.974 "aliases": [ 00:17:25.974 "53388817-7751-4dcb-81e5-9dd446be95ef" 00:17:25.974 ], 00:17:25.974 "assigned_rate_limits": { 00:17:25.974 "r_mbytes_per_sec": 0, 00:17:25.974 "rw_ios_per_sec": 0, 00:17:25.974 "rw_mbytes_per_sec": 0, 00:17:25.974 "w_mbytes_per_sec": 0 00:17:25.974 }, 00:17:25.974 "block_size": 512, 00:17:25.974 "claimed": false, 00:17:25.974 "driver_specific": { 00:17:25.974 "mp_policy": "active_passive", 00:17:25.974 "nvme": [ 00:17:25.974 { 00:17:25.974 "ctrlr_data": { 00:17:25.974 "ana_reporting": false, 00:17:25.974 "cntlid": 2, 00:17:25.974 "firmware_revision": "25.01", 00:17:25.974 "model_number": "SPDK bdev Controller", 00:17:25.974 "multi_ctrlr": true, 00:17:25.974 "oacs": { 00:17:25.974 "firmware": 0, 00:17:25.974 "format": 0, 00:17:25.974 "ns_manage": 0, 00:17:25.974 "security": 0 00:17:25.974 }, 00:17:25.974 
"serial_number": "00000000000000000000", 00:17:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.974 "vendor_id": "0x8086" 00:17:25.974 }, 00:17:25.974 "ns_data": { 00:17:25.974 "can_share": true, 00:17:25.974 "id": 1 00:17:25.974 }, 00:17:25.974 "trid": { 00:17:25.974 "adrfam": "IPv4", 00:17:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.974 "traddr": "10.0.0.2", 00:17:25.974 "trsvcid": "4420", 00:17:25.974 "trtype": "TCP" 00:17:25.974 }, 00:17:25.974 "vs": { 00:17:25.974 "nvme_version": "1.3" 00:17:25.974 } 00:17:25.974 } 00:17:25.974 ] 00:17:25.974 }, 00:17:25.974 "memory_domains": [ 00:17:25.974 { 00:17:25.974 "dma_device_id": "system", 00:17:25.974 "dma_device_type": 1 00:17:25.974 } 00:17:25.974 ], 00:17:25.974 "name": "nvme0n1", 00:17:25.974 "num_blocks": 2097152, 00:17:25.974 "numa_id": -1, 00:17:25.974 "product_name": "NVMe disk", 00:17:25.974 "supported_io_types": { 00:17:25.974 "abort": true, 00:17:25.974 "compare": true, 00:17:25.974 "compare_and_write": true, 00:17:25.974 "copy": true, 00:17:25.974 "flush": true, 00:17:25.974 "get_zone_info": false, 00:17:25.974 "nvme_admin": true, 00:17:25.974 "nvme_io": true, 00:17:25.974 "nvme_io_md": false, 00:17:25.974 "nvme_iov_md": false, 00:17:25.974 "read": true, 00:17:25.974 "reset": true, 00:17:25.974 "seek_data": false, 00:17:25.974 "seek_hole": false, 00:17:25.974 "unmap": false, 00:17:25.974 "write": true, 00:17:25.974 "write_zeroes": true, 00:17:25.974 "zcopy": false, 00:17:25.974 "zone_append": false, 00:17:25.974 "zone_management": false 00:17:25.974 }, 00:17:25.974 "uuid": "53388817-7751-4dcb-81e5-9dd446be95ef", 00:17:25.974 "zoned": false 00:17:25.974 } 00:17:25.974 ] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.HmKRM1VU4w 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.HmKRM1VU4w 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.HmKRM1VU4w 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 [2024-11-20 09:11:04.874171] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.974 [2024-11-20 09:11:04.874349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.974 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.974 [2024-11-20 09:11:04.890182] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:26.234 nvme0n1 00:17:26.234 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.234 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:26.234 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.234 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.234 [ 00:17:26.234 { 00:17:26.234 "aliases": [ 00:17:26.234 "53388817-7751-4dcb-81e5-9dd446be95ef" 00:17:26.234 ], 00:17:26.234 "assigned_rate_limits": { 00:17:26.234 "r_mbytes_per_sec": 0, 00:17:26.234 "rw_ios_per_sec": 0, 00:17:26.234 "rw_mbytes_per_sec": 0, 00:17:26.234 "w_mbytes_per_sec": 0 00:17:26.234 }, 00:17:26.234 "block_size": 512, 00:17:26.234 "claimed": false, 00:17:26.234 "driver_specific": { 00:17:26.234 "mp_policy": "active_passive", 00:17:26.234 "nvme": [ 00:17:26.234 { 00:17:26.234 "ctrlr_data": { 00:17:26.234 "ana_reporting": false, 00:17:26.234 "cntlid": 3, 00:17:26.234 "firmware_revision": "25.01", 00:17:26.234 "model_number": "SPDK bdev Controller", 00:17:26.234 "multi_ctrlr": true, 00:17:26.234 "oacs": { 00:17:26.234 "firmware": 0, 00:17:26.234 "format": 0, 00:17:26.234 "ns_manage": 0, 00:17:26.234 "security": 0 00:17:26.234 }, 00:17:26.234 "serial_number": "00000000000000000000", 00:17:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.234 "vendor_id": "0x8086" 00:17:26.234 }, 00:17:26.234 "ns_data": { 00:17:26.234 "can_share": true, 00:17:26.234 "id": 1 00:17:26.234 }, 00:17:26.234 "trid": { 00:17:26.234 "adrfam": "IPv4", 00:17:26.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.234 "traddr": "10.0.0.2", 00:17:26.234 "trsvcid": "4421", 00:17:26.234 "trtype": "TCP" 00:17:26.234 }, 00:17:26.234 "vs": { 00:17:26.234 "nvme_version": "1.3" 00:17:26.234 } 00:17:26.234 } 00:17:26.234 ] 00:17:26.234 }, 00:17:26.234 "memory_domains": [ 00:17:26.234 { 00:17:26.234 "dma_device_id": "system", 00:17:26.234 "dma_device_type": 1 00:17:26.234 } 00:17:26.234 ], 00:17:26.234 "name": "nvme0n1", 00:17:26.235 "num_blocks": 2097152, 00:17:26.235 "numa_id": -1, 00:17:26.235 "product_name": "NVMe disk", 00:17:26.235 "supported_io_types": { 00:17:26.235 "abort": true, 00:17:26.235 "compare": true, 00:17:26.235 "compare_and_write": true, 00:17:26.235 "copy": true, 00:17:26.235 "flush": true, 00:17:26.235 
"get_zone_info": false, 00:17:26.235 "nvme_admin": true, 00:17:26.235 "nvme_io": true, 00:17:26.235 "nvme_io_md": false, 00:17:26.235 "nvme_iov_md": false, 00:17:26.235 "read": true, 00:17:26.235 "reset": true, 00:17:26.235 "seek_data": false, 00:17:26.235 "seek_hole": false, 00:17:26.235 "unmap": false, 00:17:26.235 "write": true, 00:17:26.235 "write_zeroes": true, 00:17:26.235 "zcopy": false, 00:17:26.235 "zone_append": false, 00:17:26.235 "zone_management": false 00:17:26.235 }, 00:17:26.235 "uuid": "53388817-7751-4dcb-81e5-9dd446be95ef", 00:17:26.235 "zoned": false 00:17:26.235 } 00:17:26.235 ] 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.235 09:11:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.HmKRM1VU4w 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:26.235 09:11:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:26.235 rmmod nvme_tcp 00:17:26.235 rmmod nvme_fabrics 00:17:26.235 rmmod nvme_keyring 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 86871 ']' 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 86871 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86871 ']' 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86871 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.235 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86871 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86871' 00:17:26.494 killing process with pid 86871 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86871 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86871 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:26.494 09:11:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:26.494 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:26.495 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # continue 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # continue 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:17:26.754 09:11:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:17:26.754 00:17:26.754 real 0m2.399s 00:17:26.754 user 0m1.882s 00:17:26.754 sys 0m0.782s 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:26.754 ************************************ 00:17:26.754 END TEST nvmf_async_init 00:17:26.754 ************************************ 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:26.754 ************************************ 00:17:26.754 START TEST nvmf_identify 00:17:26.754 ************************************ 00:17:26.754 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:27.014 * Looking for test storage... 
00:17:27.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l 
? ver1_l : ver2_l) )) 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.014 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.015 --rc genhtml_branch_coverage=1 00:17:27.015 --rc genhtml_function_coverage=1 00:17:27.015 --rc genhtml_legend=1 00:17:27.015 --rc geninfo_all_blocks=1 00:17:27.015 --rc geninfo_unexecuted_blocks=1 00:17:27.015 00:17:27.015 ' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:17:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.015 --rc genhtml_branch_coverage=1 00:17:27.015 --rc genhtml_function_coverage=1 00:17:27.015 --rc genhtml_legend=1 00:17:27.015 --rc geninfo_all_blocks=1 00:17:27.015 --rc geninfo_unexecuted_blocks=1 00:17:27.015 00:17:27.015 ' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.015 --rc genhtml_branch_coverage=1 00:17:27.015 --rc genhtml_function_coverage=1 00:17:27.015 --rc genhtml_legend=1 00:17:27.015 --rc geninfo_all_blocks=1 00:17:27.015 --rc geninfo_unexecuted_blocks=1 00:17:27.015 00:17:27.015 ' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:27.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.015 --rc genhtml_branch_coverage=1 00:17:27.015 --rc genhtml_function_coverage=1 00:17:27.015 --rc genhtml_legend=1 00:17:27.015 --rc geninfo_all_blocks=1 00:17:27.015 --rc geninfo_unexecuted_blocks=1 00:17:27.015 00:17:27.015 ' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:27.015 
09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:27.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:27.015 
09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:27.015 09:11:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@223 -- # create_target_ns 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 
00:17:27.015 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:27.016 09:11:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up target0 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:27.016 09:11:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:27.016 10.0.0.1 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # 
local val=167772162 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:27.016 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:27.276 10.0.0.2 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- 
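At this point the trace has built interface pair 0: two veth pairs (`initiator0`/`initiator0_br` and `target0`/`target0_br`), moved `target0` into the namespace, and enslaved both `*_br` peers to `nvmf_br`. A sketch of that `setup_interface_pair` flow, under the same dry-run convention (the `RUN` knob is an illustration aid, not part of setup.sh):

```shell
#!/usr/bin/env bash
RUN=${RUN:-echo}   # dry-run by default; empty RUN + root applies for real

setup_interface_pair() {
  local id=$1 ns=$2 br=$3
  local ini="initiator$id" tgt="target$id"
  # each endpoint gets a *_br veth peer that will be enslaved to the bridge
  $RUN ip link add "$ini" type veth peer name "${ini}_br"
  $RUN ip link add "$tgt" type veth peer name "${tgt}_br"
  $RUN ip link set "$tgt" netns "$ns"            # target side lives in the ns
  local dev
  for dev in "$ini" "${ini}_br" "${tgt}_br"; do
    $RUN ip link set "$dev" up
  done
  $RUN ip link set "${ini}_br" master "$br"      # bridge ties the two pairs together
  $RUN ip link set "${tgt}_br" master "$br"
}

setup_interface_pair 0 nvmf_ns_spdk nvmf_br
```

Traffic between `initiator0` (host side) and `target0` (namespace side) then flows through `nvmf_br`, which is what the later ping checks verify.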
nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:27.276 09:11:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:27.276 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@151 -- # set_up target1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772163 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:27.277 10.0.0.3 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772164 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:27.277 09:11:06 
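The IPs assigned above come from `val_to_ip`, which unpacks a 32-bit integer from the `ip_pool` counter (starting at 0x0a000001) into a dotted quad, so consecutive pool values yield 10.0.0.1, 10.0.0.2, and so on. A self-contained reconstruction of that conversion:

```shell
#!/usr/bin/env bash
# val_to_ip: 32-bit integer -> dotted-quad string, as traced above
# (167772161 == 0x0a000001 -> 10.0.0.1). Pure bash arithmetic.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val         & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1 (initiator0)
val_to_ip 167772164   # -> 10.0.0.4 (target1)
```

Incrementing the pool by 2 per pair is why initiator/target addresses come out adjacent (.1/.2, then .3/.4).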
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:27.277 10.0.0.4 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:27.277 
09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m 
comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:27.277 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
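The `ipts` helper seen in the trace appends an `SPDK_NVMF:<rule>` comment to every iptables rule it installs, so teardown can later find and delete exactly the rules this test added. A minimal sketch of that wrapper (the `RUN` dry-run knob is an addition for illustration):

```shell
#!/usr/bin/env bash
RUN=${RUN:-echo}   # dry-run by default; empty RUN + root applies for real

# Tag each rule with a comment embedding the rule text itself, mirroring
# the 'SPDK_NVMF:...' comments visible in the trace.
ipts() {
  $RUN iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do something like `iptables-save | grep SPDK_NVMF` to enumerate the rules to remove, without disturbing unrelated firewall state.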
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:27.278 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:27.278 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:17:27.278 00:17:27.278 --- 10.0.0.1 ping statistics --- 00:17:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.278 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:27.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:27.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:17:27.278 00:17:27.278 --- 10.0.0.2 ping statistics --- 00:17:27.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.278 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:27.278 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 
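The ping checks above exercise both directions of each pair: the initiator IP is pinged from inside the target namespace, and the target IP from the host side. A sketch of the `ping_ip` helper doing that (dry-run `RUN` knob added for illustration; real pings need the interfaces to exist):

```shell
#!/usr/bin/env bash
RUN=${RUN:-echo}   # dry-run by default

# One-packet reachability check, optionally executed inside a netns.
ping_ip() {
  local ip=$1 ns=${2:-}
  if [ -n "$ns" ]; then
    $RUN ip netns exec "$ns" ping -c 1 "$ip"
  else
    $RUN ping -c 1 "$ip"
  fi
}

ping_ip 10.0.0.1 nvmf_ns_spdk   # initiator0 IP, pinged from the target ns
ping_ip 10.0.0.2                # target0 IP, pinged from the host side
```

A failure in either direction here would indicate the veth/bridge plumbing is broken before any NVMe-oF traffic is attempted.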
00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:27.538 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:27.538 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:17:27.538 00:17:27.538 --- 10.0.0.3 ping statistics --- 00:17:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.538 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:27.538 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:27.538 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:17:27.538 00:17:27.538 --- 10.0.0.4 ping statistics --- 00:17:27.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:27.538 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # return 0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 
-- # local dev=initiator0 in_ns= ip 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:27.538 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:17:27.539 09:11:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target0 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target0 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:17:27.539 09:11:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo target1 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=target1 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87134 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87134 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 87134 ']' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.539 09:11:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:27.539 [2024-11-20 09:11:06.394350] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:17:27.539 [2024-11-20 09:11:06.394478] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.798 [2024-11-20 09:11:06.551739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:27.798 [2024-11-20 09:11:06.623851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.798 [2024-11-20 09:11:06.623940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.798 [2024-11-20 09:11:06.623968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.798 [2024-11-20 09:11:06.623979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.798 [2024-11-20 09:11:06.623988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:27.798 [2024-11-20 09:11:06.625455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.798 [2024-11-20 09:11:06.625664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.798 [2024-11-20 09:11:06.625797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.798 [2024-11-20 09:11:06.625795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 [2024-11-20 09:11:07.419720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 Malloc0 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 [2024-11-20 09:11:07.530142] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 09:11:07 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:28.734 [ 00:17:28.734 { 00:17:28.734 "allow_any_host": true, 00:17:28.734 "hosts": [], 00:17:28.734 "listen_addresses": [ 00:17:28.734 { 00:17:28.734 "adrfam": "IPv4", 00:17:28.734 "traddr": "10.0.0.2", 00:17:28.734 "trsvcid": "4420", 00:17:28.734 "trtype": "TCP" 00:17:28.734 } 00:17:28.734 ], 00:17:28.734 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.734 "subtype": "Discovery" 00:17:28.734 }, 00:17:28.734 { 00:17:28.734 "allow_any_host": true, 00:17:28.734 "hosts": [], 00:17:28.734 "listen_addresses": [ 00:17:28.734 { 00:17:28.734 "adrfam": "IPv4", 00:17:28.734 "traddr": "10.0.0.2", 00:17:28.734 "trsvcid": "4420", 00:17:28.734 "trtype": "TCP" 00:17:28.734 } 00:17:28.734 ], 00:17:28.734 "max_cntlid": 65519, 00:17:28.734 "max_namespaces": 32, 00:17:28.734 "min_cntlid": 1, 00:17:28.734 "model_number": "SPDK bdev Controller", 00:17:28.734 "namespaces": [ 00:17:28.734 { 00:17:28.734 "bdev_name": "Malloc0", 00:17:28.734 "eui64": "ABCDEF0123456789", 00:17:28.734 "name": "Malloc0", 00:17:28.734 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:28.734 "nsid": 1, 00:17:28.734 "uuid": "8cf7c71b-f318-42f5-8a16-1b07794f38ae" 00:17:28.734 } 00:17:28.734 ], 00:17:28.734 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.734 "serial_number": "SPDK00000000000001", 00:17:28.734 "subtype": "NVMe" 00:17:28.734 } 00:17:28.734 ] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.734 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:28.734 [2024-11-20 09:11:07.585700] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:28.734 [2024-11-20 09:11:07.585761] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87187 ] 00:17:28.995 [2024-11-20 09:11:07.743185] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:28.995 [2024-11-20 09:11:07.743285] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:28.995 [2024-11-20 09:11:07.743292] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:28.995 [2024-11-20 09:11:07.743306] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:28.995 [2024-11-20 09:11:07.743318] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:28.995 [2024-11-20 09:11:07.743742] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:28.995 [2024-11-20 09:11:07.743859] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x527d90 0 00:17:28.995 [2024-11-20 09:11:07.748842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:28.995 [2024-11-20 09:11:07.748898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:28.995 [2024-11-20 09:11:07.748921] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:28.995 [2024-11-20 09:11:07.748925] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:28.995 
[2024-11-20 09:11:07.748959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.995 [2024-11-20 09:11:07.748967] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.995 [2024-11-20 09:11:07.748971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.995 [2024-11-20 09:11:07.748987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:28.995 [2024-11-20 09:11:07.749023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.995 [2024-11-20 09:11:07.756871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.995 [2024-11-20 09:11:07.756894] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.995 [2024-11-20 09:11:07.756899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.995 [2024-11-20 09:11:07.756904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.995 [2024-11-20 09:11:07.756917] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:28.995 [2024-11-20 09:11:07.756926] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:28.995 [2024-11-20 09:11:07.756932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:28.995 [2024-11-20 09:11:07.756949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.995 [2024-11-20 09:11:07.756954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.995 [2024-11-20 09:11:07.756958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.756967] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.756997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.757106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.757114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.757118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.757140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:28.996 [2024-11-20 09:11:07.757148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:28.996 [2024-11-20 09:11:07.757157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.757175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.757197] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.757300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.757308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.757312] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.757323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:28.996 [2024-11-20 09:11:07.757332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757345] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.757356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.757377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.757440] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.757447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.757451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.757462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 
[2024-11-20 09:11:07.757478] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.757490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.757509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.757575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.757582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.757585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.757595] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:28.996 [2024-11-20 09:11:07.757601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757720] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:28.996 [2024-11-20 09:11:07.757727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757737] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.757753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.757774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.757885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.757895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.757899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757903] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.757909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:28.996 [2024-11-20 09:11:07.757943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.757953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.757961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.757985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.758063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:17:28.996 [2024-11-20 09:11:07.758070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.758074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.758084] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:28.996 [2024-11-20 09:11:07.758090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:28.996 [2024-11-20 09:11:07.758098] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:28.996 [2024-11-20 09:11:07.758114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:28.996 [2024-11-20 09:11:07.758126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.758139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.996 [2024-11-20 09:11:07.758162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.758268] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.996 [2024-11-20 09:11:07.758276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.996 [2024-11-20 09:11:07.758280] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758284] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x527d90): datao=0, datal=4096, cccid=0 00:17:28.996 [2024-11-20 09:11:07.758290] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x568600) on tqpair(0x527d90): expected_datao=0, payload_size=4096 00:17:28.996 [2024-11-20 09:11:07.758295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758304] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758309] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.758324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.758328] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.996 [2024-11-20 09:11:07.758342] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:28.996 [2024-11-20 09:11:07.758348] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:28.996 [2024-11-20 09:11:07.758353] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:28.996 [2024-11-20 09:11:07.758359] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:28.996 [2024-11-20 09:11:07.758364] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:28.996 [2024-11-20 
09:11:07.758369] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:28.996 [2024-11-20 09:11:07.758384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:28.996 [2024-11-20 09:11:07.758393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758398] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.996 [2024-11-20 09:11:07.758402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.996 [2024-11-20 09:11:07.758410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:28.996 [2024-11-20 09:11:07.758433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.996 [2024-11-20 09:11:07.758514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.996 [2024-11-20 09:11:07.758521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.996 [2024-11-20 09:11:07.758525] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758529] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.758538] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758542] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:17:28.997 [2024-11-20 09:11:07.758560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.997 [2024-11-20 09:11:07.758581] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.997 [2024-11-20 09:11:07.758601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.997 [2024-11-20 09:11:07.758620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:28.997 [2024-11-20 09:11:07.758635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:28.997 
[2024-11-20 09:11:07.758643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.997 [2024-11-20 09:11:07.758678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568600, cid 0, qid 0 00:17:28.997 [2024-11-20 09:11:07.758685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568780, cid 1, qid 0 00:17:28.997 [2024-11-20 09:11:07.758691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568900, cid 2, qid 0 00:17:28.997 [2024-11-20 09:11:07.758696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.997 [2024-11-20 09:11:07.758701] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568c00, cid 4, qid 0 00:17:28.997 [2024-11-20 09:11:07.758823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.997 [2024-11-20 09:11:07.758833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.997 [2024-11-20 09:11:07.758837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.758841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568c00) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.758847] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:28.997 [2024-11-20 09:11:07.758853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:28.997 [2024-11-20 09:11:07.758866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:17:28.997 [2024-11-20 09:11:07.758871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.758879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.997 [2024-11-20 09:11:07.758903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568c00, cid 4, qid 0 00:17:28.997 [2024-11-20 09:11:07.759019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.997 [2024-11-20 09:11:07.759026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.997 [2024-11-20 09:11:07.759030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x527d90): datao=0, datal=4096, cccid=4 00:17:28.997 [2024-11-20 09:11:07.759039] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x568c00) on tqpair(0x527d90): expected_datao=0, payload_size=4096 00:17:28.997 [2024-11-20 09:11:07.759043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759051] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759056] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.997 [2024-11-20 09:11:07.759071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.997 [2024-11-20 09:11:07.759075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759079] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568c00) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.759094] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:28.997 [2024-11-20 09:11:07.759139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759146] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.759154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.997 [2024-11-20 09:11:07.759172] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.759187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:28.997 [2024-11-20 09:11:07.759218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568c00, cid 4, qid 0 00:17:28.997 [2024-11-20 09:11:07.759226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568d80, cid 5, qid 0 00:17:28.997 [2024-11-20 09:11:07.759356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.997 [2024-11-20 09:11:07.759363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.997 [2024-11-20 09:11:07.759367] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759371] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x527d90): datao=0, datal=1024, cccid=4 00:17:28.997 [2024-11-20 09:11:07.759376] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x568c00) on tqpair(0x527d90): expected_datao=0, payload_size=1024 00:17:28.997 [2024-11-20 09:11:07.759381] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759388] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759392] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.997 [2024-11-20 09:11:07.759404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.997 [2024-11-20 09:11:07.759408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.759412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568d80) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.802851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.997 [2024-11-20 09:11:07.802875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.997 [2024-11-20 09:11:07.802880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.802885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568c00) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.802901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.802907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.802916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.997 [2024-11-20 09:11:07.802951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568c00, cid 4, qid 0 00:17:28.997 [2024-11-20 09:11:07.803035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.997 [2024-11-20 09:11:07.803042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:28.997 
[2024-11-20 09:11:07.803045] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803049] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x527d90): datao=0, datal=3072, cccid=4 00:17:28.997 [2024-11-20 09:11:07.803070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x568c00) on tqpair(0x527d90): expected_datao=0, payload_size=3072 00:17:28.997 [2024-11-20 09:11:07.803075] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803098] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803103] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.997 [2024-11-20 09:11:07.803119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.997 [2024-11-20 09:11:07.803123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568c00) on tqpair=0x527d90 00:17:28.997 [2024-11-20 09:11:07.803138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.997 [2024-11-20 09:11:07.803144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x527d90) 00:17:28.997 [2024-11-20 09:11:07.803152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.997 [2024-11-20 09:11:07.803181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568c00, cid 4, qid 0 00:17:28.997 [2024-11-20 09:11:07.803259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:28.997 [2024-11-20 09:11:07.803266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:17:28.997 [2024-11-20 09:11:07.803270] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:28.997 [2024-11-20 09:11:07.803274] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x527d90): datao=0, datal=8, cccid=4
00:17:28.997 [2024-11-20 09:11:07.803278] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x568c00) on tqpair(0x527d90): expected_datao=0, payload_size=8
00:17:28.997 [2024-11-20 09:11:07.803283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:28.997 [2024-11-20 09:11:07.803290] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:28.998 [2024-11-20 09:11:07.803295] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:28.998 [2024-11-20 09:11:07.844888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:28.998 [2024-11-20 09:11:07.844911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:28.998 [2024-11-20 09:11:07.844917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:28.998 [2024-11-20 09:11:07.844922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568c00) on tqpair=0x527d90
00:17:28.998 =====================================================
00:17:28.998 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:28.998 =====================================================
00:17:28.998 Controller Capabilities/Features
00:17:28.998 ================================
00:17:28.998 Vendor ID: 0000
00:17:28.998 Subsystem Vendor ID: 0000
00:17:28.998 Serial Number: ....................
00:17:28.998 Model Number: ........................................
00:17:28.998 Firmware Version: 25.01
00:17:28.998 Recommended Arb Burst: 0
00:17:28.998 IEEE OUI Identifier: 00 00 00
00:17:28.998 Multi-path I/O
00:17:28.998 May have multiple subsystem ports: No
00:17:28.998 May have multiple controllers: No
00:17:28.998 Associated with SR-IOV VF: No
00:17:28.998 Max Data Transfer Size: 131072
00:17:28.998 Max Number of Namespaces: 0
00:17:28.998 Max Number of I/O Queues: 1024
00:17:28.998 NVMe Specification Version (VS): 1.3
00:17:28.998 NVMe Specification Version (Identify): 1.3
00:17:28.998 Maximum Queue Entries: 128
00:17:28.998 Contiguous Queues Required: Yes
00:17:28.998 Arbitration Mechanisms Supported
00:17:28.998 Weighted Round Robin: Not Supported
00:17:28.998 Vendor Specific: Not Supported
00:17:28.998 Reset Timeout: 15000 ms
00:17:28.998 Doorbell Stride: 4 bytes
00:17:28.998 NVM Subsystem Reset: Not Supported
00:17:28.998 Command Sets Supported
00:17:28.998 NVM Command Set: Supported
00:17:28.998 Boot Partition: Not Supported
00:17:28.998 Memory Page Size Minimum: 4096 bytes
00:17:28.998 Memory Page Size Maximum: 4096 bytes
00:17:28.998 Persistent Memory Region: Not Supported
00:17:28.998 Optional Asynchronous Events Supported
00:17:28.998 Namespace Attribute Notices: Not Supported
00:17:28.998 Firmware Activation Notices: Not Supported
00:17:28.998 ANA Change Notices: Not Supported
00:17:28.998 PLE Aggregate Log Change Notices: Not Supported
00:17:28.998 LBA Status Info Alert Notices: Not Supported
00:17:28.998 EGE Aggregate Log Change Notices: Not Supported
00:17:28.998 Normal NVM Subsystem Shutdown event: Not Supported
00:17:28.998 Zone Descriptor Change Notices: Not Supported
00:17:28.998 Discovery Log Change Notices: Supported
00:17:28.998 Controller Attributes
00:17:28.998 128-bit Host Identifier: Not Supported
00:17:28.998 Non-Operational Permissive Mode: Not Supported
00:17:28.998 NVM Sets: Not Supported
00:17:28.998 Read Recovery Levels: Not Supported
00:17:28.998 Endurance Groups: Not Supported
00:17:28.998 Predictable Latency Mode: Not Supported
00:17:28.998 Traffic Based Keep ALive: Not Supported
00:17:28.998 Namespace Granularity: Not Supported
00:17:28.998 SQ Associations: Not Supported
00:17:28.998 UUID List: Not Supported
00:17:28.998 Multi-Domain Subsystem: Not Supported
00:17:28.998 Fixed Capacity Management: Not Supported
00:17:28.998 Variable Capacity Management: Not Supported
00:17:28.998 Delete Endurance Group: Not Supported
00:17:28.998 Delete NVM Set: Not Supported
00:17:28.998 Extended LBA Formats Supported: Not Supported
00:17:28.998 Flexible Data Placement Supported: Not Supported
00:17:28.998
00:17:28.998 Controller Memory Buffer Support
00:17:28.998 ================================
00:17:28.998 Supported: No
00:17:28.998
00:17:28.998 Persistent Memory Region Support
00:17:28.998 ================================
00:17:28.998 Supported: No
00:17:28.998
00:17:28.998 Admin Command Set Attributes
00:17:28.998 ============================
00:17:28.998 Security Send/Receive: Not Supported
00:17:28.998 Format NVM: Not Supported
00:17:28.998 Firmware Activate/Download: Not Supported
00:17:28.998 Namespace Management: Not Supported
00:17:28.998 Device Self-Test: Not Supported
00:17:28.998 Directives: Not Supported
00:17:28.998 NVMe-MI: Not Supported
00:17:28.998 Virtualization Management: Not Supported
00:17:28.998 Doorbell Buffer Config: Not Supported
00:17:28.998 Get LBA Status Capability: Not Supported
00:17:28.998 Command & Feature Lockdown Capability: Not Supported
00:17:28.998 Abort Command Limit: 1
00:17:28.998 Async Event Request Limit: 4
00:17:28.998 Number of Firmware Slots: N/A
00:17:28.998 Firmware Slot 1 Read-Only: N/A
00:17:28.998 Firmware Activation Without Reset: N/A
00:17:28.998 Multiple Update Detection Support: N/A
00:17:28.998 Firmware Update Granularity: No Information Provided
00:17:28.998 Per-Namespace SMART Log: No
00:17:28.998 Asymmetric Namespace Access Log Page: Not Supported
00:17:28.998 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:28.998 Command Effects Log Page: Not Supported
00:17:28.998 Get Log Page Extended Data: Supported
00:17:28.998 Telemetry Log Pages: Not Supported
00:17:28.998 Persistent Event Log Pages: Not Supported
00:17:28.998 Supported Log Pages Log Page: May Support
00:17:28.998 Commands Supported & Effects Log Page: Not Supported
00:17:28.998 Feature Identifiers & Effects Log Page:May Support
00:17:28.998 NVMe-MI Commands & Effects Log Page: May Support
00:17:28.998 Data Area 4 for Telemetry Log: Not Supported
00:17:28.998 Error Log Page Entries Supported: 128
00:17:28.998 Keep Alive: Not Supported
00:17:28.998
00:17:28.998 NVM Command Set Attributes
00:17:28.998 ==========================
00:17:28.998 Submission Queue Entry Size
00:17:28.998 Max: 1
00:17:28.998 Min: 1
00:17:28.998 Completion Queue Entry Size
00:17:28.998 Max: 1
00:17:28.998 Min: 1
00:17:28.998 Number of Namespaces: 0
00:17:28.998 Compare Command: Not Supported
00:17:28.998 Write Uncorrectable Command: Not Supported
00:17:28.998 Dataset Management Command: Not Supported
00:17:28.998 Write Zeroes Command: Not Supported
00:17:28.998 Set Features Save Field: Not Supported
00:17:28.998 Reservations: Not Supported
00:17:28.998 Timestamp: Not Supported
00:17:28.998 Copy: Not Supported
00:17:28.998 Volatile Write Cache: Not Present
00:17:28.998 Atomic Write Unit (Normal): 1
00:17:28.998 Atomic Write Unit (PFail): 1
00:17:28.998 Atomic Compare & Write Unit: 1
00:17:28.998 Fused Compare & Write: Supported
00:17:28.998 Scatter-Gather List
00:17:28.998 SGL Command Set: Supported
00:17:28.998 SGL Keyed: Supported
00:17:28.998 SGL Bit Bucket Descriptor: Not Supported
00:17:28.998 SGL Metadata Pointer: Not Supported
00:17:28.998 Oversized SGL: Not Supported
00:17:28.998 SGL Metadata Address: Not Supported
00:17:28.998 SGL Offset: Supported
00:17:28.998 Transport SGL Data Block: Not Supported
00:17:28.998 Replay Protected Memory Block: Not Supported
00:17:28.998
00:17:28.998 Firmware Slot Information
00:17:28.998 =========================
00:17:28.998 Active slot: 0
00:17:28.998
00:17:28.998
00:17:28.998 Error Log
00:17:28.998 =========
00:17:28.998
00:17:28.998 Active Namespaces
00:17:28.998 =================
00:17:28.998 Discovery Log Page
00:17:28.998 ==================
00:17:28.998 Generation Counter: 2
00:17:28.998 Number of Records: 2
00:17:28.998 Record Format: 0
00:17:28.998
00:17:28.998 Discovery Log Entry 0
00:17:28.998 ----------------------
00:17:28.998 Transport Type: 3 (TCP)
00:17:28.998 Address Family: 1 (IPv4)
00:17:28.998 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:28.998 Entry Flags:
00:17:28.998 Duplicate Returned Information: 1
00:17:28.998 Explicit Persistent Connection Support for Discovery: 1
00:17:28.998 Transport Requirements:
00:17:28.998 Secure Channel: Not Required
00:17:28.998 Port ID: 0 (0x0000)
00:17:28.998 Controller ID: 65535 (0xffff)
00:17:28.998 Admin Max SQ Size: 128
00:17:28.998 Transport Service Identifier: 4420
00:17:28.998 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:28.998 Transport Address: 10.0.0.2
00:17:28.998 Discovery Log Entry 1
00:17:28.998 ----------------------
00:17:28.998 Transport Type: 3 (TCP)
00:17:28.998 Address Family: 1 (IPv4)
00:17:28.998 Subsystem Type: 2 (NVM Subsystem)
00:17:28.998 Entry Flags:
00:17:28.998 Duplicate Returned Information: 0
00:17:28.998 Explicit Persistent Connection Support for Discovery: 0
00:17:28.998 Transport Requirements:
00:17:28.998 Secure Channel: Not Required
00:17:28.998 Port ID: 0 (0x0000)
00:17:28.998 Controller ID: 65535 (0xffff)
00:17:28.998 Admin Max SQ Size: 128
00:17:28.998 Transport Service Identifier: 4420
00:17:28.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:28.999 Transport Address: 10.0.0.2 [2024-11-20 09:11:07.845038] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:17:28.999 [2024-11-20
09:11:07.845054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568600) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.999 [2024-11-20 09:11:07.845068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568780) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.999 [2024-11-20 09:11:07.845078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568900) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.999 [2024-11-20 09:11:07.845088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:28.999 [2024-11-20 09:11:07.845105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 
09:11:07.845224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.845228] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.845391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.845395] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845404] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:28.999 [2024-11-20 09:11:07.845409] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:28.999 [2024-11-20 09:11:07.845420] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 
[2024-11-20 09:11:07.845429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.845536] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.845540] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.845661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.845664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 
00:17:28.999 [2024-11-20 09:11:07.845679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845688] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.845798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.845802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845823] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845827] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.845858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.845936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.845945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 
[2024-11-20 09:11:07.845949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.845965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.845974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.845982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.846004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.846075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.846082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.846086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.846100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.846117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.846137] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 
00:17:28.999 [2024-11-20 09:11:07.846207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.846214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.846218] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.846232] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.846249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.846269] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.846326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.846333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.846337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.846352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:28.999 [2024-11-20 09:11:07.846369] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:28.999 [2024-11-20 09:11:07.846388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:28.999 [2024-11-20 09:11:07.846453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:28.999 [2024-11-20 09:11:07.846460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:28.999 [2024-11-20 09:11:07.846464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:28.999 [2024-11-20 09:11:07.846468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:28.999 [2024-11-20 09:11:07.846479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:29.000 [2024-11-20 09:11:07.846495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.000 [2024-11-20 09:11:07.846515] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:29.000 [2024-11-20 09:11:07.846593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.000 [2024-11-20 09:11:07.846600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.000 [2024-11-20 09:11:07.846604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:29.000 [2024-11-20 09:11:07.846620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846625] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:29.000 [2024-11-20 09:11:07.846636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.000 [2024-11-20 09:11:07.846657] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:29.000 [2024-11-20 09:11:07.846726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.000 [2024-11-20 09:11:07.846733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.000 [2024-11-20 09:11:07.846736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.846741] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:29.000 [2024-11-20 09:11:07.846751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.849835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.849845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x527d90) 00:17:29.000 [2024-11-20 09:11:07.849854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.000 [2024-11-20 09:11:07.849884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x568a80, cid 3, qid 0 00:17:29.000 [2024-11-20 09:11:07.849973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.000 [2024-11-20 09:11:07.849982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.000 [2024-11-20 09:11:07.849986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.000 [2024-11-20 09:11:07.850001] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x568a80) on tqpair=0x527d90 00:17:29.000 [2024-11-20 09:11:07.850011] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:17:29.000 00:17:29.000 09:11:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:29.000 [2024-11-20 09:11:07.894688] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:29.000 [2024-11-20 09:11:07.894746] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87190 ] 00:17:29.264 [2024-11-20 09:11:08.051238] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:29.264 [2024-11-20 09:11:08.051307] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:29.264 [2024-11-20 09:11:08.051314] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:29.264 [2024-11-20 09:11:08.051325] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:29.264 [2024-11-20 09:11:08.051335] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:29.264 [2024-11-20 09:11:08.051614] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:29.264 [2024-11-20 09:11:08.051683] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d91d90 0 00:17:29.264 [2024-11-20 09:11:08.066858] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:29.264 [2024-11-20 09:11:08.066897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:29.264 [2024-11-20 09:11:08.066918] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:29.264 [2024-11-20 09:11:08.066922] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:29.264 [2024-11-20 09:11:08.066952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.264 [2024-11-20 09:11:08.066959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.264 [2024-11-20 09:11:08.066963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.264 [2024-11-20 09:11:08.066975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:29.264 [2024-11-20 09:11:08.067007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.264 [2024-11-20 09:11:08.074850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.074883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.074904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.074909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.074935] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:29.265 [2024-11-20 09:11:08.074943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:29.265 [2024-11-20 09:11:08.074950] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:29.265 [2024-11-20 09:11:08.074966] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.074972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.074976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.074985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.075113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:29.265 [2024-11-20 09:11:08.075121] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:29.265 [2024-11-20 09:11:08.075129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.075172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075193] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075264] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075268] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.075278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:29.265 [2024-11-20 09:11:08.075287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075294] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.075310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075329] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 
00:17:29.265 [2024-11-20 09:11:08.075428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075439] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.075455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.075560] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:29.265 [2024-11-20 09:11:08.075566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075575] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075686] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN 
= 1 00:17:29.265 [2024-11-20 09:11:08.075693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.075718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075851] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.075857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:29.265 [2024-11-20 09:11:08.075868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075878] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.075885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.075907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.075971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.265 [2024-11-20 09:11:08.075978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.265 [2024-11-20 09:11:08.075982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.075986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.265 [2024-11-20 09:11:08.075992] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:29.265 [2024-11-20 09:11:08.075997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:29.265 [2024-11-20 09:11:08.076006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:29.265 [2024-11-20 09:11:08.076021] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:29.265 [2024-11-20 09:11:08.076033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.265 [2024-11-20 09:11:08.076038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.265 [2024-11-20 09:11:08.076046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.265 [2024-11-20 09:11:08.076067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.265 [2024-11-20 09:11:08.076188] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.266 [2024-11-20 09:11:08.076196] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.266 [2024-11-20 09:11:08.076200] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076204] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=4096, cccid=0 00:17:29.266 [2024-11-20 09:11:08.076210] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2600) on tqpair(0x1d91d90): expected_datao=0, payload_size=4096 00:17:29.266 [2024-11-20 09:11:08.076215] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076223] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076228] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076237] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.266 [2024-11-20 09:11:08.076243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.266 [2024-11-20 09:11:08.076247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.266 [2024-11-20 09:11:08.076260] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:29.266 [2024-11-20 09:11:08.076266] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:29.266 [2024-11-20 09:11:08.076271] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:29.266 [2024-11-20 09:11:08.076276] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 
00:17:29.266 [2024-11-20 09:11:08.076281] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:29.266 [2024-11-20 09:11:08.076287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076301] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076328] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.266 [2024-11-20 09:11:08.076350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.266 [2024-11-20 09:11:08.076416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.266 [2024-11-20 09:11:08.076424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.266 [2024-11-20 09:11:08.076427] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.266 [2024-11-20 09:11:08.076440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d91d90) 
00:17:29.266 [2024-11-20 09:11:08.076455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.266 [2024-11-20 09:11:08.076462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.266 [2024-11-20 09:11:08.076483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.266 [2024-11-20 09:11:08.076503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.266 [2024-11-20 09:11:08.076523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076537] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076549] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 09:11:08.076579] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2600, cid 0, qid 0 00:17:29.266 [2024-11-20 09:11:08.076586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2780, cid 1, qid 0 00:17:29.266 [2024-11-20 09:11:08.076592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2900, cid 2, qid 0 00:17:29.266 [2024-11-20 09:11:08.076597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.266 [2024-11-20 09:11:08.076602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.266 [2024-11-20 09:11:08.076703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.266 [2024-11-20 09:11:08.076710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.266 [2024-11-20 09:11:08.076714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.266 [2024-11-20 09:11:08.076724] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:29.266 [2024-11-20 09:11:08.076730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.266 [2024-11-20 09:11:08.076813] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.266 [2024-11-20 09:11:08.076881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.266 [2024-11-20 09:11:08.076888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.266 [2024-11-20 09:11:08.076892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.266 [2024-11-20 09:11:08.076963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:29.266 [2024-11-20 09:11:08.076976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:29.266 [2024-11-20 
09:11:08.076985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.076990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.266 [2024-11-20 09:11:08.076998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.266 [2024-11-20 09:11:08.077020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.266 [2024-11-20 09:11:08.077097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.266 [2024-11-20 09:11:08.077104] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.266 [2024-11-20 09:11:08.077108] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.077112] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=4096, cccid=4 00:17:29.266 [2024-11-20 09:11:08.077117] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2c00) on tqpair(0x1d91d90): expected_datao=0, payload_size=4096 00:17:29.266 [2024-11-20 09:11:08.077122] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.077130] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.077134] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.077143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.266 [2024-11-20 09:11:08.077149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.266 [2024-11-20 09:11:08.077153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.266 [2024-11-20 09:11:08.077157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.266 
[2024-11-20 09:11:08.077173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:29.267 [2024-11-20 09:11:08.077187] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.077220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 09:11:08.077242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.267 [2024-11-20 09:11:08.077333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.267 [2024-11-20 09:11:08.077340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.267 [2024-11-20 09:11:08.077344] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=4096, cccid=4 00:17:29.267 [2024-11-20 09:11:08.077353] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2c00) on tqpair(0x1d91d90): expected_datao=0, payload_size=4096 00:17:29.267 [2024-11-20 09:11:08.077358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077365] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077370] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.077384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 [2024-11-20 09:11:08.077388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.077410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.077444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 09:11:08.077465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.267 [2024-11-20 09:11:08.077545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.267 [2024-11-20 09:11:08.077552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.267 [2024-11-20 09:11:08.077556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077560] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=4096, cccid=4 00:17:29.267 
[2024-11-20 09:11:08.077565] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2c00) on tqpair(0x1d91d90): expected_datao=0, payload_size=4096 00:17:29.267 [2024-11-20 09:11:08.077570] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077577] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077581] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.077596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 [2024-11-20 09:11:08.077599] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.077613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to set host ID (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077659] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:29.267 [2024-11-20 09:11:08.077665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:29.267 [2024-11-20 09:11:08.077671] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:29.267 [2024-11-20 09:11:08.077686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077692] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.077700] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 09:11:08.077707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077712] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.077722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:29.267 [2024-11-20 09:11:08.077749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.267 [2024-11-20 09:11:08.077772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2d80, cid 5, qid 0 00:17:29.267 [2024-11-20 09:11:08.077855] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.077863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 
[2024-11-20 09:11:08.077867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.077879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.077885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 [2024-11-20 09:11:08.077889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2d80) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.077903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.077908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.077939] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 09:11:08.077964] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2d80, cid 5, qid 0 00:17:29.267 [2024-11-20 09:11:08.078037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.078044] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 [2024-11-20 09:11:08.078048] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.078052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2d80) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.078063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.078068] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 
09:11:08.078075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.267 [2024-11-20 09:11:08.078094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2d80, cid 5, qid 0 00:17:29.267 [2024-11-20 09:11:08.078162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.267 [2024-11-20 09:11:08.078175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.267 [2024-11-20 09:11:08.078180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.078184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2d80) on tqpair=0x1d91d90 00:17:29.267 [2024-11-20 09:11:08.078195] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.267 [2024-11-20 09:11:08.078200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d91d90) 00:17:29.267 [2024-11-20 09:11:08.078207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 09:11:08.078227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2d80, cid 5, qid 0 00:17:29.268 [2024-11-20 09:11:08.078286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.268 [2024-11-20 09:11:08.078293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.268 [2024-11-20 09:11:08.078297] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2d80) on tqpair=0x1d91d90 00:17:29.268 [2024-11-20 09:11:08.078321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=5 on tqpair(0x1d91d90) 00:17:29.268 [2024-11-20 09:11:08.078335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 09:11:08.078343] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d91d90) 00:17:29.268 [2024-11-20 09:11:08.078354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 09:11:08.078362] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d91d90) 00:17:29.268 [2024-11-20 09:11:08.078373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 09:11:08.078381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d91d90) 00:17:29.268 [2024-11-20 09:11:08.078392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.268 [2024-11-20 09:11:08.078414] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2d80, cid 5, qid 0 00:17:29.268 [2024-11-20 09:11:08.078421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2c00, cid 4, qid 0 00:17:29.268 [2024-11-20 09:11:08.078426] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2f00, cid 6, qid 0 00:17:29.268 [2024-11-20 09:11:08.078432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd3080, cid 7, qid 0 00:17:29.268 [2024-11-20 09:11:08.078582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.268 [2024-11-20 09:11:08.078589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.268 [2024-11-20 09:11:08.078593] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078597] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=8192, cccid=5 00:17:29.268 [2024-11-20 09:11:08.078610] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2d80) on tqpair(0x1d91d90): expected_datao=0, payload_size=8192 00:17:29.268 [2024-11-20 09:11:08.078615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078632] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078638] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.268 [2024-11-20 09:11:08.078650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.268 [2024-11-20 09:11:08.078654] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078658] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=512, cccid=4 00:17:29.268 [2024-11-20 09:11:08.078663] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2c00) on tqpair(0x1d91d90): expected_datao=0, payload_size=512 00:17:29.268 [2024-11-20 09:11:08.078667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078675] 
nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078679] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.268 [2024-11-20 09:11:08.078691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.268 [2024-11-20 09:11:08.078695] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078699] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=512, cccid=6 00:17:29.268 [2024-11-20 09:11:08.078704] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd2f00) on tqpair(0x1d91d90): expected_datao=0, payload_size=512 00:17:29.268 [2024-11-20 09:11:08.078708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078719] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:29.268 [2024-11-20 09:11:08.078731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:29.268 [2024-11-20 09:11:08.078734] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.078738] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d91d90): datao=0, datal=4096, cccid=7 00:17:29.268 [2024-11-20 09:11:08.078743] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1dd3080) on tqpair(0x1d91d90): expected_datao=0, payload_size=4096 00:17:29.268 [2024-11-20 09:11:08.078748] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.082821] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:17:29.268 [2024-11-20 09:11:08.082870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.082899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.268 [2024-11-20 09:11:08.082906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.268 [2024-11-20 09:11:08.082910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.082914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2d80) on tqpair=0x1d91d90 00:17:29.268 [2024-11-20 09:11:08.082934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.268 [2024-11-20 09:11:08.082941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.268 [2024-11-20 09:11:08.082945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.082949] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2c00) on tqpair=0x1d91d90 00:17:29.268 [2024-11-20 09:11:08.082962] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.268 [2024-11-20 09:11:08.082969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.268 [2024-11-20 09:11:08.082973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.268 [2024-11-20 09:11:08.082977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2f00) on tqpair=0x1d91d90 00:17:29.268 ===================================================== 00:17:29.268 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:29.268 ===================================================== 00:17:29.268 Controller Capabilities/Features 00:17:29.268 ================================ 00:17:29.268 Vendor ID: 8086 00:17:29.268 Subsystem Vendor ID: 8086 00:17:29.268 Serial Number: SPDK00000000000001 00:17:29.268 Model Number: SPDK bdev Controller 00:17:29.268 Firmware Version: 25.01 
00:17:29.268 Recommended Arb Burst: 6 00:17:29.268 IEEE OUI Identifier: e4 d2 5c 00:17:29.268 Multi-path I/O 00:17:29.268 May have multiple subsystem ports: Yes 00:17:29.268 May have multiple controllers: Yes 00:17:29.268 Associated with SR-IOV VF: No 00:17:29.268 Max Data Transfer Size: 131072 00:17:29.268 Max Number of Namespaces: 32 00:17:29.268 Max Number of I/O Queues: 127 00:17:29.268 NVMe Specification Version (VS): 1.3 00:17:29.268 NVMe Specification Version (Identify): 1.3 00:17:29.268 Maximum Queue Entries: 128 00:17:29.268 Contiguous Queues Required: Yes 00:17:29.268 Arbitration Mechanisms Supported 00:17:29.268 Weighted Round Robin: Not Supported 00:17:29.268 Vendor Specific: Not Supported 00:17:29.268 Reset Timeout: 15000 ms 00:17:29.268 Doorbell Stride: 4 bytes 00:17:29.268 NVM Subsystem Reset: Not Supported 00:17:29.268 Command Sets Supported 00:17:29.268 NVM Command Set: Supported 00:17:29.268 Boot Partition: Not Supported 00:17:29.268 Memory Page Size Minimum: 4096 bytes 00:17:29.268 Memory Page Size Maximum: 4096 bytes 00:17:29.268 Persistent Memory Region: Not Supported 00:17:29.268 Optional Asynchronous Events Supported 00:17:29.268 Namespace Attribute Notices: Supported 00:17:29.268 Firmware Activation Notices: Not Supported 00:17:29.268 ANA Change Notices: Not Supported 00:17:29.268 PLE Aggregate Log Change Notices: Not Supported 00:17:29.268 LBA Status Info Alert Notices: Not Supported 00:17:29.268 EGE Aggregate Log Change Notices: Not Supported 00:17:29.268 Normal NVM Subsystem Shutdown event: Not Supported 00:17:29.268 Zone Descriptor Change Notices: Not Supported 00:17:29.268 Discovery Log Change Notices: Not Supported 00:17:29.268 Controller Attributes 00:17:29.268 128-bit Host Identifier: Supported 00:17:29.268 Non-Operational Permissive Mode: Not Supported 00:17:29.268 NVM Sets: Not Supported 00:17:29.268 Read Recovery Levels: Not Supported 00:17:29.268 Endurance Groups: Not Supported 00:17:29.268 Predictable Latency Mode: Not Supported 
00:17:29.268 Traffic Based Keep ALive: Not Supported 00:17:29.268 Namespace Granularity: Not Supported 00:17:29.268 SQ Associations: Not Supported 00:17:29.268 UUID List: Not Supported 00:17:29.268 Multi-Domain Subsystem: Not Supported 00:17:29.269 Fixed Capacity Management: Not Supported 00:17:29.269 Variable Capacity Management: Not Supported 00:17:29.269 Delete Endurance Group: Not Supported 00:17:29.269 Delete NVM Set: Not Supported 00:17:29.269 Extended LBA Formats Supported: Not Supported 00:17:29.269 Flexible Data Placement Supported: Not Supported 00:17:29.269 00:17:29.269 Controller Memory Buffer Support 00:17:29.269 ================================ 00:17:29.269 Supported: No 00:17:29.269 00:17:29.269 Persistent Memory Region Support 00:17:29.269 ================================ 00:17:29.269 Supported: No 00:17:29.269 00:17:29.269 Admin Command Set Attributes 00:17:29.269 ============================ 00:17:29.269 Security Send/Receive: Not Supported 00:17:29.269 Format NVM: Not Supported 00:17:29.269 Firmware Activate/Download: Not Supported 00:17:29.269 Namespace Management: Not Supported 00:17:29.269 Device Self-Test: Not Supported 00:17:29.269 Directives: Not Supported 00:17:29.269 NVMe-MI: Not Supported 00:17:29.269 Virtualization Management: Not Supported 00:17:29.269 Doorbell Buffer Config: Not Supported 00:17:29.269 Get LBA Status Capability: Not Supported 00:17:29.269 Command & Feature Lockdown Capability: Not Supported 00:17:29.269 Abort Command Limit: 4 00:17:29.269 Async Event Request Limit: 4 00:17:29.269 Number of Firmware Slots: N/A 00:17:29.269 Firmware Slot 1 Read-Only: N/A 00:17:29.269 Firmware Activation Without Reset: N/A 00:17:29.269 Multiple Update Detection Support: N/A 00:17:29.269 Firmware Update Granularity: No Information Provided 00:17:29.269 Per-Namespace SMART Log: No 00:17:29.269 Asymmetric Namespace Access Log Page: Not Supported 00:17:29.269 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:29.269 Command Effects Log Page: 
Supported 00:17:29.269 Get Log Page Extended Data: Supported 00:17:29.269 Telemetry Log Pages: Not Supported 00:17:29.269 Persistent Event Log Pages: Not Supported 00:17:29.269 Supported Log Pages Log Page: May Support 00:17:29.269 Commands Supported & Effects Log Page: Not Supported 00:17:29.269 Feature Identifiers & Effects Log Page:May Support 00:17:29.269 NVMe-MI Commands & Effects Log Page: May Support 00:17:29.269 Data Area 4 for Telemetry Log: Not Supported 00:17:29.269 Error Log Page Entries Supported: 128 00:17:29.269 Keep Alive: Supported 00:17:29.269 Keep Alive Granularity: 10000 ms 00:17:29.269 00:17:29.269 NVM Command Set Attributes 00:17:29.269 ========================== 00:17:29.269 Submission Queue Entry Size 00:17:29.269 Max: 64 00:17:29.269 Min: 64 00:17:29.269 Completion Queue Entry Size 00:17:29.269 Max: 16 00:17:29.269 Min: 16 00:17:29.269 Number of Namespaces: 32 00:17:29.269 Compare Command: Supported 00:17:29.269 Write Uncorrectable Command: Not Supported 00:17:29.269 Dataset Management Command: Supported 00:17:29.269 Write Zeroes Command: Supported 00:17:29.269 Set Features Save Field: Not Supported 00:17:29.269 Reservations: Supported 00:17:29.269 Timestamp: Not Supported 00:17:29.269 Copy: Supported 00:17:29.269 Volatile Write Cache: Present 00:17:29.269 Atomic Write Unit (Normal): 1 00:17:29.269 Atomic Write Unit (PFail): 1 00:17:29.269 Atomic Compare & Write Unit: 1 00:17:29.269 Fused Compare & Write: Supported 00:17:29.269 Scatter-Gather List 00:17:29.269 SGL Command Set: Supported 00:17:29.269 SGL Keyed: Supported 00:17:29.269 SGL Bit Bucket Descriptor: Not Supported 00:17:29.269 SGL Metadata Pointer: Not Supported 00:17:29.269 Oversized SGL: Not Supported 00:17:29.269 SGL Metadata Address: Not Supported 00:17:29.269 SGL Offset: Supported 00:17:29.269 Transport SGL Data Block: Not Supported 00:17:29.269 Replay Protected Memory Block: Not Supported 00:17:29.269 00:17:29.269 Firmware Slot Information 00:17:29.269 
========================= 00:17:29.269 Active slot: 1 00:17:29.269 Slot 1 Firmware Revision: 25.01 00:17:29.269 00:17:29.269 00:17:29.269 Commands Supported and Effects 00:17:29.269 ============================== 00:17:29.269 Admin Commands 00:17:29.269 -------------- 00:17:29.269 Get Log Page (02h): Supported 00:17:29.269 Identify (06h): Supported 00:17:29.269 Abort (08h): Supported 00:17:29.269 Set Features (09h): Supported 00:17:29.269 Get Features (0Ah): Supported 00:17:29.269 Asynchronous Event Request (0Ch): Supported 00:17:29.269 Keep Alive (18h): Supported 00:17:29.269 I/O Commands 00:17:29.269 ------------ 00:17:29.269 Flush (00h): Supported LBA-Change 00:17:29.269 Write (01h): Supported LBA-Change 00:17:29.269 Read (02h): Supported 00:17:29.269 Compare (05h): Supported 00:17:29.269 Write Zeroes (08h): Supported LBA-Change 00:17:29.269 Dataset Management (09h): Supported LBA-Change 00:17:29.269 Copy (19h): Supported LBA-Change 00:17:29.269 00:17:29.269 Error Log 00:17:29.269 ========= 00:17:29.269 00:17:29.269 Arbitration 00:17:29.269 =========== 00:17:29.269 Arbitration Burst: 1 00:17:29.269 00:17:29.269 Power Management 00:17:29.269 ================ 00:17:29.269 Number of Power States: 1 00:17:29.269 Current Power State: Power State #0 00:17:29.269 Power State #0: 00:17:29.269 Max Power: 0.00 W 00:17:29.269 Non-Operational State: Operational 00:17:29.269 Entry Latency: Not Reported 00:17:29.269 Exit Latency: Not Reported 00:17:29.269 Relative Read Throughput: 0 00:17:29.269 Relative Read Latency: 0 00:17:29.269 Relative Write Throughput: 0 00:17:29.269 Relative Write Latency: 0 00:17:29.269 Idle Power: Not Reported 00:17:29.269 Active Power: Not Reported 00:17:29.269 Non-Operational Permissive Mode: Not Supported 00:17:29.269 00:17:29.269 Health Information 00:17:29.269 ================== 00:17:29.269 Critical Warnings: 00:17:29.269 Available Spare Space: OK 00:17:29.269 Temperature: OK 00:17:29.269 Device Reliability: OK 00:17:29.269 Read Only: No 
00:17:29.269 Volatile Memory Backup: OK 00:17:29.269 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:29.269 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:29.269 Available Spare: 0% 00:17:29.269 Available Spare Threshold: 0% 00:17:29.269 Life Percentage Used:[2024-11-20 09:11:08.082984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.269 [2024-11-20 09:11:08.082991] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.269 [2024-11-20 09:11:08.082995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.269 [2024-11-20 09:11:08.082999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd3080) on tqpair=0x1d91d90 00:17:29.269 [2024-11-20 09:11:08.083106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.269 [2024-11-20 09:11:08.083114] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d91d90) 00:17:29.269 [2024-11-20 09:11:08.083124] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.269 [2024-11-20 09:11:08.083154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd3080, cid 7, qid 0 00:17:29.269 [2024-11-20 09:11:08.083238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.269 [2024-11-20 09:11:08.083245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.269 [2024-11-20 09:11:08.083249] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.269 [2024-11-20 09:11:08.083254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd3080) on tqpair=0x1d91d90 00:17:29.269 [2024-11-20 09:11:08.083294] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:29.269 [2024-11-20 09:11:08.083307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1dd2600) on tqpair=0x1d91d90 00:17:29.269 [2024-11-20 09:11:08.083315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 09:11:08.083321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2780) on tqpair=0x1d91d90 00:17:29.269 [2024-11-20 09:11:08.083326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 09:11:08.083331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2900) on tqpair=0x1d91d90 00:17:29.269 [2024-11-20 09:11:08.083336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.269 [2024-11-20 09:11:08.083341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.270 [2024-11-20 09:11:08.083346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:29.270 [2024-11-20 09:11:08.083356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d91d90) 00:17:29.270 [2024-11-20 09:11:08.083373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 09:11:08.083398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.270 [2024-11-20 09:11:08.083457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.270 [2024-11-20 09:11:08.083464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:17:29.270 [2024-11-20 09:11:08.083468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.270 [2024-11-20 09:11:08.083480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d91d90) 00:17:29.270 [2024-11-20 09:11:08.083497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 09:11:08.083519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.270 [2024-11-20 09:11:08.083604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.270 [2024-11-20 09:11:08.083611] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.270 [2024-11-20 09:11:08.083615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.270 [2024-11-20 09:11:08.083624] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:29.270 [2024-11-20 09:11:08.083630] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:29.270 [2024-11-20 09:11:08.083640] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x1d91d90) 00:17:29.270 [2024-11-20 09:11:08.083656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.270 [2024-11-20 09:11:08.083674] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.270 [2024-11-20 09:11:08.083730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.270 [2024-11-20 09:11:08.083737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.270 [2024-11-20 09:11:08.083741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.270 [2024-11-20 09:11:08.083745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.272 [2024-11-20 
09:11:08.086723] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.272 [2024-11-20 09:11:08.090810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.272 [2024-11-20 09:11:08.090827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.272 [2024-11-20 09:11:08.090832] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.272 [2024-11-20 09:11:08.090837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.272 [2024-11-20 09:11:08.090850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:29.273 [2024-11-20 09:11:08.090856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:29.273 [2024-11-20 09:11:08.090860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d91d90) 00:17:29.273 [2024-11-20 09:11:08.090869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.273 [2024-11-20 09:11:08.090895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1dd2a80, cid 3, qid 0 00:17:29.273 [2024-11-20 09:11:08.090963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:29.273 [2024-11-20 09:11:08.090970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:29.273 [2024-11-20 09:11:08.090974] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:29.273 [2024-11-20 09:11:08.090978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1dd2a80) on tqpair=0x1d91d90 00:17:29.273 [2024-11-20 09:11:08.090987] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:29.273 0% 00:17:29.273 Data Units Read: 0 00:17:29.273 Data Units Written: 0 00:17:29.273 Host Read Commands: 0 00:17:29.273 Host Write 
Commands: 0 00:17:29.273 Controller Busy Time: 0 minutes 00:17:29.273 Power Cycles: 0 00:17:29.273 Power On Hours: 0 hours 00:17:29.273 Unsafe Shutdowns: 0 00:17:29.273 Unrecoverable Media Errors: 0 00:17:29.273 Lifetime Error Log Entries: 0 00:17:29.273 Warning Temperature Time: 0 minutes 00:17:29.273 Critical Temperature Time: 0 minutes 00:17:29.273 00:17:29.273 Number of Queues 00:17:29.273 ================ 00:17:29.273 Number of I/O Submission Queues: 127 00:17:29.273 Number of I/O Completion Queues: 127 00:17:29.273 00:17:29.273 Active Namespaces 00:17:29.273 ================= 00:17:29.273 Namespace ID:1 00:17:29.273 Error Recovery Timeout: Unlimited 00:17:29.273 Command Set Identifier: NVM (00h) 00:17:29.273 Deallocate: Supported 00:17:29.273 Deallocated/Unwritten Error: Not Supported 00:17:29.273 Deallocated Read Value: Unknown 00:17:29.273 Deallocate in Write Zeroes: Not Supported 00:17:29.273 Deallocated Guard Field: 0xFFFF 00:17:29.273 Flush: Supported 00:17:29.273 Reservation: Supported 00:17:29.273 Namespace Sharing Capabilities: Multiple Controllers 00:17:29.273 Size (in LBAs): 131072 (0GiB) 00:17:29.273 Capacity (in LBAs): 131072 (0GiB) 00:17:29.273 Utilization (in LBAs): 131072 (0GiB) 00:17:29.273 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:29.273 EUI64: ABCDEF0123456789 00:17:29.273 UUID: 8cf7c71b-f318-42f5-8a16-1b07794f38ae 00:17:29.273 Thin Provisioning: Not Supported 00:17:29.273 Per-NS Atomic Units: Yes 00:17:29.273 Atomic Boundary Size (Normal): 0 00:17:29.273 Atomic Boundary Size (PFail): 0 00:17:29.273 Atomic Boundary Offset: 0 00:17:29.273 Maximum Single Source Range Length: 65535 00:17:29.273 Maximum Copy Length: 65535 00:17:29.273 Maximum Source Range Count: 1 00:17:29.273 NGUID/EUI64 Never Reused: No 00:17:29.273 Namespace Write Protected: No 00:17:29.273 Number of LBA Formats: 1 00:17:29.273 Current LBA Format: LBA Format #00 00:17:29.273 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:29.273 00:17:29.273 09:11:08 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:29.273 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:29.273 rmmod nvme_tcp 00:17:29.533 rmmod nvme_fabrics 00:17:29.533 rmmod nvme_keyring 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 87134 ']' 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 87134 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 87134 ']' 
00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 87134 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87134 00:17:29.533 killing process with pid 87134 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87134' 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 87134 00:17:29.533 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 87134 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # continue 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:17:29.792 00:17:29.792 real 0m3.069s 00:17:29.792 user 0m7.917s 00:17:29.792 sys 0m0.861s 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:29.792 ************************************ 00:17:29.792 END TEST nvmf_identify 00:17:29.792 ************************************ 
00:17:29.792 09:11:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.792 09:11:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:30.052 ************************************ 00:17:30.052 START TEST nvmf_perf 00:17:30.052 ************************************ 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:30.052 * Looking for test storage... 00:17:30.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
scripts/common.sh@338 -- # local 'op=<' 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:30.052 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:30.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.053 --rc genhtml_branch_coverage=1 00:17:30.053 --rc genhtml_function_coverage=1 00:17:30.053 --rc genhtml_legend=1 00:17:30.053 --rc geninfo_all_blocks=1 00:17:30.053 --rc geninfo_unexecuted_blocks=1 00:17:30.053 00:17:30.053 ' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:30.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.053 --rc genhtml_branch_coverage=1 00:17:30.053 --rc genhtml_function_coverage=1 00:17:30.053 --rc genhtml_legend=1 00:17:30.053 --rc geninfo_all_blocks=1 00:17:30.053 --rc geninfo_unexecuted_blocks=1 00:17:30.053 00:17:30.053 ' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:30.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.053 --rc genhtml_branch_coverage=1 00:17:30.053 --rc genhtml_function_coverage=1 00:17:30.053 --rc genhtml_legend=1 00:17:30.053 --rc geninfo_all_blocks=1 00:17:30.053 --rc geninfo_unexecuted_blocks=1 00:17:30.053 00:17:30.053 ' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:30.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:30.053 --rc genhtml_branch_coverage=1 00:17:30.053 --rc genhtml_function_coverage=1 00:17:30.053 --rc genhtml_legend=1 00:17:30.053 --rc geninfo_all_blocks=1 00:17:30.053 --rc geninfo_unexecuted_blocks=1 00:17:30.053 00:17:30.053 ' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:30.053 
09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:30.053 
09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 
00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:30.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@223 -- # create_target_ns 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:30.053 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:30.054 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up target0 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:30.314 09:11:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:30.314 10.0.0.1 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.314 
09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:30.314 10.0.0.2 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ 
-n NVMF_TARGET_NS_CMD ]] 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:30.314 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:30.315 09:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 
00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@151 -- # 
set_up target1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772163 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:30.315 10.0.0.3 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772164 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:30.315 10.0.0.4 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:30.315 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:30.316 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:30.576 09:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:30.576 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:30.577 09:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:30.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:17:30.577 00:17:30.577 --- 10.0.0.1 ping statistics --- 00:17:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.577 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:30.577 09:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:30.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:17:30.577 00:17:30.577 --- 10.0.0.2 ping statistics --- 00:17:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.577 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:30.577 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.577 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.132 ms 00:17:30.577 00:17:30.577 --- 10.0.0.3 ping statistics --- 00:17:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.577 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@101 -- # echo target1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:30.577 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:30.577 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:17:30.577 00:17:30.577 --- 10.0.0.4 ping statistics --- 00:17:30.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.577 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # return 0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator0 00:17:30.577 09:11:09 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # 
ip=10.0.0.3 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:30.577 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target0 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target0 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ 
-n 10.0.0.2 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo target1 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=target1 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=87413 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 87413 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87413 ']' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.578 Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.578 09:11:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:30.847 [2024-11-20 09:11:09.495297] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:30.847 [2024-11-20 09:11:09.495413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.847 [2024-11-20 09:11:09.647158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.847 [2024-11-20 09:11:09.705641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.847 [2024-11-20 09:11:09.705732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.847 [2024-11-20 09:11:09.705745] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.847 [2024-11-20 09:11:09.705753] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.847 [2024-11-20 09:11:09.705772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
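At this point `nvmfappstart` has launched `nvmf_tgt` inside the namespace and `waitforlisten` blocks until the RPC socket `/var/tmp/spdk.sock` is usable (the log's "Waiting for process to start up…" message). SPDK's real helper polls the RPC endpoint with retries; the loop below is a simplified sketch of that wait, checking only for the socket file (the function name and the socket-presence check are assumptions, not the script's actual logic):

```shell
# Simplified sketch of waiting for an SPDK app's UNIX-domain RPC socket.
# The real waitforlisten issues RPC calls; this only checks the socket exists.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                        # timed out, mirror a nonzero failure status
}
```

In the log the subsequent `rpc.py` calls (`load_subsystem_config`, `framework_get_config bdev`, `bdev_malloc_create`, and later the `nvmf_*` RPCs) only proceed once this wait succeeds.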
00:17:30.847 [2024-11-20 09:11:09.706981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.847 [2024-11-20 09:11:09.707110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.847 [2024-11-20 09:11:09.708273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:30.847 [2024-11-20 09:11:09.708307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:31.782 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:32.349 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:32.349 09:11:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:32.608 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:32.608 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.866 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:32.866 09:11:11 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:32.866 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:32.866 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:32.866 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.124 [2024-11-20 09:11:11.874400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.124 09:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:33.382 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:33.382 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:33.641 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:33.641 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:33.900 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:34.157 [2024-11-20 09:11:12.968455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:34.157 09:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.413 09:11:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:34.413 09:11:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 
50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:34.413 09:11:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:34.413 09:11:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:35.787 Initializing NVMe Controllers 00:17:35.787 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:35.787 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:35.787 Initialization complete. Launching workers. 00:17:35.787 ======================================================== 00:17:35.787 Latency(us) 00:17:35.787 Device Information : IOPS MiB/s Average min max 00:17:35.787 PCIE (0000:00:10.0) NSID 1 from core 0: 22702.99 88.68 1409.72 401.42 8585.49 00:17:35.787 ======================================================== 00:17:35.787 Total : 22702.99 88.68 1409.72 401.42 8585.49 00:17:35.787 00:17:35.787 09:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:37.161 Initializing NVMe Controllers 00:17:37.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:37.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:37.161 Initialization complete. Launching workers. 
00:17:37.161 ======================================================== 00:17:37.161 Latency(us) 00:17:37.161 Device Information : IOPS MiB/s Average min max 00:17:37.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3265.89 12.76 305.92 104.84 7237.18 00:17:37.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8063.39 5911.81 13972.42 00:17:37.161 ======================================================== 00:17:37.161 Total : 3390.88 13.25 591.87 104.84 13972.42 00:17:37.161 00:17:37.161 09:11:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:38.536 Initializing NVMe Controllers 00:17:38.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:38.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:38.536 Initialization complete. Launching workers. 
00:17:38.536 ======================================================== 00:17:38.536 Latency(us) 00:17:38.536 Device Information : IOPS MiB/s Average min max 00:17:38.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8809.91 34.41 3633.18 634.10 10396.25 00:17:38.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2703.44 10.56 11946.26 4916.75 21253.58 00:17:38.536 ======================================================== 00:17:38.536 Total : 11513.35 44.97 5585.17 634.10 21253.58 00:17:38.536 00:17:38.536 09:11:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:38.536 09:11:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:41.083 Initializing NVMe Controllers 00:17:41.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.083 Controller IO queue size 128, less than required. 00:17:41.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.083 Controller IO queue size 128, less than required. 00:17:41.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:41.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:41.083 Initialization complete. Launching workers. 
00:17:41.083 ======================================================== 00:17:41.083 Latency(us) 00:17:41.083 Device Information : IOPS MiB/s Average min max 00:17:41.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1268.58 317.14 103229.73 66441.97 181402.67 00:17:41.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 557.31 139.33 236785.13 81418.67 353698.13 00:17:41.083 ======================================================== 00:17:41.083 Total : 1825.89 456.47 143994.66 66441.97 353698.13 00:17:41.083 00:17:41.083 09:11:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:41.083 Initializing NVMe Controllers 00:17:41.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:41.083 Controller IO queue size 128, less than required. 00:17:41.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:41.083 Controller IO queue size 128, less than required. 00:17:41.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:41.083 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:17:41.083 WARNING: Some requested NVMe devices were skipped 00:17:41.083 No valid NVMe controllers or AIO or URING devices found 00:17:41.083 09:11:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:43.618 Initializing NVMe Controllers 00:17:43.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:43.618 Controller IO queue size 128, less than required. 00:17:43.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:43.618 Controller IO queue size 128, less than required. 00:17:43.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:43.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:43.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:43.618 Initialization complete. Launching workers. 
00:17:43.618 00:17:43.618 ==================== 00:17:43.618 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:43.618 TCP transport: 00:17:43.618 polls: 8164 00:17:43.618 idle_polls: 4768 00:17:43.618 sock_completions: 3396 00:17:43.618 nvme_completions: 4083 00:17:43.618 submitted_requests: 6060 00:17:43.618 queued_requests: 1 00:17:43.618 00:17:43.618 ==================== 00:17:43.618 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:43.618 TCP transport: 00:17:43.618 polls: 8156 00:17:43.618 idle_polls: 5059 00:17:43.618 sock_completions: 3097 00:17:43.618 nvme_completions: 6039 00:17:43.618 submitted_requests: 9018 00:17:43.618 queued_requests: 1 00:17:43.618 ======================================================== 00:17:43.618 Latency(us) 00:17:43.619 Device Information : IOPS MiB/s Average min max 00:17:43.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1020.40 255.10 129248.18 85353.53 181323.99 00:17:43.619 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1509.36 377.34 85310.16 47945.35 133494.34 00:17:43.619 ======================================================== 00:17:43.619 Total : 2529.76 632.44 103032.99 47945.35 181323.99 00:17:43.619 00:17:43.619 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:43.619 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 
-- # sync 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:44.185 rmmod nvme_tcp 00:17:44.185 rmmod nvme_fabrics 00:17:44.185 rmmod nvme_keyring 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:17:44.185 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 87413 ']' 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 87413 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87413 ']' 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87413 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87413 00:17:44.186 killing process with pid 87413 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87413' 00:17:44.186 09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87413 00:17:44.186 
09:11:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87413 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- 
# local dev=initiator0 in_ns= 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # continue 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:17:44.752 09:11:23 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:17:44.752 00:17:44.752 real 0m14.936s 00:17:44.752 user 0m54.296s 00:17:44.752 sys 0m3.620s 00:17:44.752 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.753 09:11:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:44.753 ************************************ 00:17:44.753 END TEST nvmf_perf 00:17:44.753 ************************************ 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.012 ************************************ 00:17:45.012 START TEST nvmf_fio_host 00:17:45.012 ************************************ 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:45.012 * Looking for test storage... 
00:17:45.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l 
? ver1_l : ver2_l) )) 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.012 --rc genhtml_branch_coverage=1 00:17:45.012 --rc genhtml_function_coverage=1 00:17:45.012 --rc genhtml_legend=1 00:17:45.012 --rc geninfo_all_blocks=1 00:17:45.012 --rc geninfo_unexecuted_blocks=1 00:17:45.012 00:17:45.012 ' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:17:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.012 --rc genhtml_branch_coverage=1 00:17:45.012 --rc genhtml_function_coverage=1 00:17:45.012 --rc genhtml_legend=1 00:17:45.012 --rc geninfo_all_blocks=1 00:17:45.012 --rc geninfo_unexecuted_blocks=1 00:17:45.012 00:17:45.012 ' 00:17:45.012 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:45.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.012 --rc genhtml_branch_coverage=1 00:17:45.013 --rc genhtml_function_coverage=1 00:17:45.013 --rc genhtml_legend=1 00:17:45.013 --rc geninfo_all_blocks=1 00:17:45.013 --rc geninfo_unexecuted_blocks=1 00:17:45.013 00:17:45.013 ' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:45.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.013 --rc genhtml_branch_coverage=1 00:17:45.013 --rc genhtml_function_coverage=1 00:17:45.013 --rc genhtml_legend=1 00:17:45.013 --rc geninfo_all_blocks=1 00:17:45.013 --rc geninfo_unexecuted_blocks=1 00:17:45.013 00:17:45.013 ' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 
-- # export PATH 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # 
NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:45.013 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@223 -- # create_target_ns 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:45.013 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:45.014 09:11:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:45.014 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:45.273 09:11:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth 
== veth ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target0 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:45.273 09:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:45.273 10.0.0.1 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:45.273 
09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:45.273 10.0.0.2 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:45.273 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- 
# ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 
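The `set_ip`/`val_to_ip` traces above convert a 32-bit integer (e.g. 167772161) into dotted-quad form via `printf '%u.%u.%u.%u\n'` before assigning it to a device. A minimal re-implementation of that helper — the shift/mask logic is inferred from the trace, not copied from the SPDK source — looks like:

```shell
# Sketch of val_to_ip as traced in nvmf/setup.sh above: split a 32-bit
# integer into four octets and print them in dotted-quad notation.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # prints 10.0.0.1 (assigned to initiator0 in the log)
val_to_ip 167772162   # prints 10.0.0.2 (assigned to target0 in the log)
```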
00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 
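The `(( _dev++, ip_pool += 2 ))` traces above show how `setup_interfaces` walks an address pool, claiming two consecutive addresses per initiator/target pair. A standalone sketch of that loop — pool start `0x0a000001` and pair count 2 taken from the log, the `setup_interface_pair` call replaced by a print — would be:

```shell
# Sketch of the setup_interfaces pool arithmetic traced above: each
# initiator/target pair claims two consecutive IPs starting at 10.0.0.1.
no=2                       # total_initiator_target_pairs from the log
ip_pool=$(( 0x0a000001 ))  # 167772161, i.e. 10.0.0.1
for (( _dev = 0; _dev < no; _dev++, ip_pool += 2 )); do
  # in the real script, setup_interface_pair "$_dev" veth "$ip_pool" tcp
  # runs here: initiator$_dev gets $ip_pool, target$_dev gets $ip_pool + 1
  printf 'pair %d: initiator ip=%u target ip=%u\n' \
    "$_dev" "$ip_pool" "$(( ip_pool + 1 ))"
done
```

This matches the traced values: pair 0 gets 167772161/167772162 (10.0.0.1 and 10.0.0.2), pair 1 gets 167772163/167772164 (10.0.0.3 and 10.0.0.4).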
00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@151 -- # set_up target1 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:45.274 09:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772163 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:45.274 09:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:45.274 10.0.0.3 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.274 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772164 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:45.533 09:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:45.533 10.0.0.4 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=initiator1 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:45.533 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:45.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:45.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:17:45.534 00:17:45.534 --- 10.0.0.1 ping statistics --- 00:17:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.534 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:45.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:17:45.534 00:17:45.534 --- 10.0.0.2 ping statistics --- 00:17:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.534 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 
00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:45.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:45.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:17:45.534 00:17:45.534 --- 10.0.0.3 ping statistics --- 00:17:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.534 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:45.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:45.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:17:45.534 00:17:45.534 --- 10.0.0.4 ping statistics --- 00:17:45.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.534 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # return 0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 
-- # local dev=initiator0 in_ns= ip 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:45.534 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator0 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:17:45.535 09:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target0 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target0 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:17:45.535 09:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo target1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=target1 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:45.535 
09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87938 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:45.535 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87938 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 87938 ']' 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.793 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:45.793 [2024-11-20 09:11:24.518174] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:17:45.793 [2024-11-20 09:11:24.518280] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.793 [2024-11-20 09:11:24.673881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:46.050 [2024-11-20 09:11:24.732102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.050 [2024-11-20 09:11:24.732183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.050 [2024-11-20 09:11:24.732204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.050 [2024-11-20 09:11:24.732230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.050 [2024-11-20 09:11:24.732243] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:46.050 [2024-11-20 09:11:24.733677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.050 [2024-11-20 09:11:24.733838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.050 [2024-11-20 09:11:24.734907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.050 [2024-11-20 09:11:24.734925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.050 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:46.050 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:46.050 09:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:46.307 [2024-11-20 09:11:25.163988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.307 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:46.307 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:46.307 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:46.565 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:46.823 Malloc1 00:17:46.823 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:47.082 09:11:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:47.340 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:47.599 [2024-11-20 
09:11:26.356854] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.599 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:47.857 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:47.858 09:11:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:17:48.117 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:17:48.117 fio-3.35 00:17:48.117 Starting 1 thread 00:17:50.732 00:17:50.732 test: (groupid=0, jobs=1): err= 0: pid=88061: Wed Nov 20 09:11:29 2024 00:17:50.732 read: IOPS=8995, BW=35.1MiB/s (36.8MB/s)(70.5MiB/2007msec) 00:17:50.732 slat (nsec): min=1936, max=254787, avg=2540.78, stdev=2951.97 00:17:50.732 clat (usec): min=2713, max=13336, 
avg=7440.69, stdev=613.08 00:17:50.732 lat (usec): min=2749, max=13338, avg=7443.23, stdev=612.97 00:17:50.732 clat percentiles (usec): 00:17:50.732 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6718], 20.00th=[ 6980], 00:17:50.732 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:17:50.732 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8455], 00:17:50.732 | 99.00th=[ 9110], 99.50th=[ 9503], 99.90th=[11076], 99.95th=[12387], 00:17:50.732 | 99.99th=[13173] 00:17:50.732 bw ( KiB/s): min=35456, max=36616, per=100.00%, avg=35982.00, stdev=597.68, samples=4 00:17:50.732 iops : min= 8864, max= 9154, avg=8995.50, stdev=149.42, samples=4 00:17:50.732 write: IOPS=9014, BW=35.2MiB/s (36.9MB/s)(70.7MiB/2007msec); 0 zone resets 00:17:50.732 slat (usec): min=2, max=194, avg= 2.63, stdev= 2.31 00:17:50.732 clat (usec): min=1945, max=13455, avg=6706.60, stdev=553.64 00:17:50.732 lat (usec): min=1956, max=13457, avg=6709.24, stdev=553.55 00:17:50.732 clat percentiles (usec): 00:17:50.732 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6325], 00:17:50.732 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6783], 00:17:50.732 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7504], 00:17:50.732 | 99.00th=[ 7898], 99.50th=[ 8225], 99.90th=[11994], 99.95th=[12780], 00:17:50.732 | 99.99th=[13435] 00:17:50.732 bw ( KiB/s): min=35720, max=36416, per=100.00%, avg=36056.00, stdev=294.23, samples=4 00:17:50.732 iops : min= 8930, max= 9104, avg=9014.00, stdev=73.56, samples=4 00:17:50.732 lat (msec) : 2=0.01%, 4=0.13%, 10=99.65%, 20=0.21% 00:17:50.732 cpu : usr=68.20%, sys=23.68%, ctx=6, majf=0, minf=7 00:17:50.732 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:50.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.732 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.732 issued rwts: total=18054,18092,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:17:50.732 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.732 00:17:50.732 Run status group 0 (all jobs): 00:17:50.732 READ: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.5MiB (73.9MB), run=2007-2007msec 00:17:50.732 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=70.7MiB (74.1MB), run=2007-2007msec 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:17:50.732 09:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:17:50.733 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:50.733 fio-3.35 00:17:50.733 Starting 1 thread 00:17:53.266 00:17:53.266 test: (groupid=0, jobs=1): err= 0: pid=88105: Wed Nov 20 09:11:31 2024 00:17:53.266 read: IOPS=7954, BW=124MiB/s (130MB/s)(249MiB/2006msec) 00:17:53.266 slat (usec): min=2, max=126, avg= 3.83, stdev= 2.65 00:17:53.266 clat (usec): min=2761, max=21834, avg=9427.93, stdev=2473.67 00:17:53.266 lat (usec): 
min=2765, max=21838, avg=9431.75, stdev=2473.82 00:17:53.266 clat percentiles (usec): 00:17:53.266 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7308], 00:17:53.266 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:17:53.266 | 70.00th=[10552], 80.00th=[11600], 90.00th=[12518], 95.00th=[13698], 00:17:53.266 | 99.00th=[15926], 99.50th=[16712], 99.90th=[20841], 99.95th=[21365], 00:17:53.266 | 99.99th=[21627] 00:17:53.266 bw ( KiB/s): min=59488, max=72320, per=51.20%, avg=65168.00, stdev=5584.31, samples=4 00:17:53.266 iops : min= 3718, max= 4520, avg=4073.00, stdev=349.02, samples=4 00:17:53.266 write: IOPS=4522, BW=70.7MiB/s (74.1MB/s)(133MiB/1884msec); 0 zone resets 00:17:53.266 slat (usec): min=32, max=375, avg=38.95, stdev= 9.86 00:17:53.266 clat (usec): min=3107, max=19388, avg=11695.91, stdev=2086.26 00:17:53.266 lat (usec): min=3142, max=19424, avg=11734.85, stdev=2087.18 00:17:53.266 clat percentiles (usec): 00:17:53.266 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:17:53.266 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:17:53.266 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14484], 95.00th=[15533], 00:17:53.266 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19268], 00:17:53.266 | 99.99th=[19268] 00:17:53.266 bw ( KiB/s): min=61600, max=75904, per=93.82%, avg=67896.00, stdev=6161.33, samples=4 00:17:53.266 iops : min= 3850, max= 4744, avg=4243.50, stdev=385.08, samples=4 00:17:53.266 lat (msec) : 4=0.15%, 10=47.85%, 20=51.89%, 50=0.11% 00:17:53.266 cpu : usr=76.26%, sys=15.16%, ctx=45, majf=0, minf=6 00:17:53.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:53.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:53.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:53.266 issued rwts: total=15957,8521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:53.266 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:17:53.266 00:17:53.266 Run status group 0 (all jobs): 00:17:53.266 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=249MiB (261MB), run=2006-2006msec 00:17:53.266 WRITE: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=133MiB (140MB), run=1884-1884msec 00:17:53.266 09:11:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:53.266 rmmod nvme_tcp 00:17:53.266 rmmod nvme_fabrics 00:17:53.266 rmmod nvme_keyring 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 87938 ']' 00:17:53.266 09:11:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 87938 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87938 ']' 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87938 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.266 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87938 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.525 killing process with pid 87938 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87938' 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87938 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87938 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:53.525 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@266 -- # 
delete_dev initiator1 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # continue 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:17:53.785 00:17:53.785 real 0m8.872s 00:17:53.785 user 0m35.233s 00:17:53.785 sys 0m2.354s 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.785 09:11:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.785 ************************************ 00:17:53.785 END TEST nvmf_fio_host 00:17:53.785 ************************************ 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:53.785 ************************************ 00:17:53.785 START TEST nvmf_failover 00:17:53.785 ************************************ 00:17:53.785 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:53.785 * Looking for test storage... 00:17:54.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@336 -- # read -ra ver1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.046 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.047 --rc genhtml_branch_coverage=1 00:17:54.047 --rc genhtml_function_coverage=1 00:17:54.047 --rc genhtml_legend=1 00:17:54.047 --rc geninfo_all_blocks=1 00:17:54.047 --rc geninfo_unexecuted_blocks=1 00:17:54.047 00:17:54.047 ' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:17:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.047 --rc genhtml_branch_coverage=1 00:17:54.047 --rc genhtml_function_coverage=1 00:17:54.047 --rc genhtml_legend=1 00:17:54.047 --rc geninfo_all_blocks=1 00:17:54.047 --rc geninfo_unexecuted_blocks=1 00:17:54.047 00:17:54.047 ' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.047 --rc genhtml_branch_coverage=1 00:17:54.047 --rc genhtml_function_coverage=1 00:17:54.047 --rc genhtml_legend=1 00:17:54.047 --rc geninfo_all_blocks=1 00:17:54.047 --rc geninfo_unexecuted_blocks=1 00:17:54.047 00:17:54.047 ' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.047 --rc genhtml_branch_coverage=1 00:17:54.047 --rc genhtml_function_coverage=1 00:17:54.047 --rc genhtml_legend=1 00:17:54.047 --rc geninfo_all_blocks=1 00:17:54.047 --rc geninfo_unexecuted_blocks=1 00:17:54.047 00:17:54.047 ' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:54.047 
09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:54.047 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:54.047 
09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # [[ 
virt == phy ]] 00:17:54.047 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@280 -- # nvmf_veth_init 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@223 -- # create_target_ns 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # create_main_bridge 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@105 -- # delete_main_bridge 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e 
/sys/class/net/nvmf_br/address ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:17:54.048 
09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator0 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target0 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0 up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target0_br 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target0 00:17:54.048 09:11:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:17:54.048 10.0.0.1 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:17:54.048 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:17:54.049 10.0.0.2 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator0 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:17:54.049 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target0_br 00:17:54.309 09:11:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:54.309 09:11:32 
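The xtrace above shows `setup_interface_pair` building the first initiator/target veth pair: create the `initiator0`/`initiator0_br` and `target0`/`target0_br` veth pairs, bring the links up, move `target0` into the `nvmf_ns_spdk` namespace, assign 10.0.0.1 and 10.0.0.2, enslave the `_br` peers to the `nvmf_br` bridge, and open TCP port 4420 via iptables. A condensed dry-run sketch of that sequence (the `run` wrapper echoes instead of executing, since the real commands need root; the helper names mirror the trace but are re-derived here, not copied from setup.sh):

```shell
#!/usr/bin/env bash
# Dry-run stand-in: print each ip/iptables command instead of executing it.
run() { printf '%s\n' "$*"; }

create_veth()   { run ip link add "$1" type veth peer name "$2"; }  # setup.sh@149
set_up()        { run ip link set "$1" up; }                        # setup.sh@207
add_to_ns()     { run ip link set "$1" netns nvmf_ns_spdk; }        # setup.sh@144
set_ip()        { run ip addr add "$2/24" dev "$1"; }               # setup.sh@198
add_to_bridge() { run ip link set "$1" master nvmf_br; }            # setup.sh@127

setup_pair_sketch() {            # mirrors setup_interface_pair for pair 0
  create_veth initiator0 initiator0_br
  set_up initiator0; set_up initiator0_br
  create_veth target0 target0_br
  set_up target0; set_up target0_br
  add_to_ns target0              # target side lives in the SPDK netns
  set_ip initiator0 10.0.0.1
  set_ip target0 10.0.0.2        # real script wraps this in 'ip netns exec'
  add_to_bridge initiator0_br
  add_to_bridge target0_br
  run iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
}
setup_pair_sketch
```

The bridge only enslaves the `_br` peers, so traffic between the host-side initiator and the namespaced target flows initiator0 → initiator0_br → nvmf_br → target0_br → target0.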
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up initiator1 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:54.309 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:54.310 09:11:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 
-- # local dev=target1 peer=target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@151 -- # set_up target1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1 up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # set_up target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns target1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:54.310 09:11:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772163 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:17:54.310 10.0.0.3 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772164 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:17:54.310 10.0.0.4 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up initiator1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:17:54.310 09:11:33 
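Each `set_ip` call receives the address as a 32-bit integer from the pool (167772161 is 0x0A000001) and converts it to dotted-quad form via `val_to_ip`, which the trace shows expanding to `printf '%u.%u.%u.%u\n' 10 0 0 3` and so on. A minimal re-derivation of that conversion (not copied from setup.sh, but consistent with the printf calls in the trace):

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address, one octet per shift.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772164   # -> 10.0.0.4
```

This is why the loop can simply do `ip_pool += 2` per pair: consecutive integers map to consecutive host addresses in 10.0.0.0/24.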
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@129 -- # set_up target1_br 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 
-- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 2 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:54.310 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:54.311 09:11:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:54.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:54.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:17:54.311 00:17:54.311 --- 10.0.0.1 ping statistics --- 00:17:54.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.311 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:54.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:54.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:17:54.311 00:17:54.311 --- 10.0.0.2 ping statistics --- 00:17:54.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.311 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 
00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:17:54.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:54.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:54.311 00:17:54.311 --- 10.0.0.3 ping statistics --- 00:17:54.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.311 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:17:54.311 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:54.311 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:17:54.311 00:17:54.311 --- 10.0.0.4 ping statistics --- 00:17:54.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:54.311 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # return 0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:17:54.311 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 
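With both pairs configured, `ping_ips 2` verifies connectivity in each direction: the target namespace pings each initiator address, and the host pings each target address. A hedged dry-run sketch of that loop (the `run` wrapper echoes instead of executing, since real pings need the devices and the `nvmf_ns_spdk` namespace from the trace; the address arithmetic is an assumption based on the odd/even pattern in the log):

```shell
run() { printf '%s\n' "$*"; }   # echo instead of executing

ping_ips_sketch() {             # mirrors ping_ips from nvmf/setup.sh
  local pairs=$1 pair
  for (( pair = 0; pair < pairs; pair++ )); do
    # initiator IPs are odd (10.0.0.1, 10.0.0.3), target IPs even (10.0.0.2, 10.0.0.4)
    run ip netns exec nvmf_ns_spdk ping -c 1 "10.0.0.$((pair * 2 + 1))"
    run ping -c 1 "10.0.0.$((pair * 2 + 2))"
  done
}
ping_ips_sketch 2
```

The single-packet pings double as a fail-fast gate: any broken link in the veth/bridge/netns chain aborts setup before the actual NVMe-oF failover test starts.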
-- # local dev=initiator0 in_ns= ip 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=initiator1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:17:54.571 09:11:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target0 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:54.571 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:17:54.572 09:11:33 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo target1 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=target1 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 
00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=88374 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 88374 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88374 ']' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.572 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:54.572 [2024-11-20 09:11:33.373556] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:17:54.572 [2024-11-20 09:11:33.373642] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.830 [2024-11-20 09:11:33.521097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:54.830 [2024-11-20 09:11:33.583465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.830 [2024-11-20 09:11:33.583527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.830 [2024-11-20 09:11:33.583555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.830 [2024-11-20 09:11:33.583566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.830 [2024-11-20 09:11:33.583574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
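The xtrace above shows nvmf/setup.sh resolving each logical interface (initiator0, initiator1, target0, target1) to its IP by reading /sys/class/net/&lt;dev&gt;/ifalias, wrapping the read in `ip netns exec nvmf_ns_spdk` when the device lives inside the target namespace. A minimal sketch of that lookup, reconstructed from the trace (not the actual nvmf/setup.sh source); the sysfs root is parameterized here purely so the function can be exercised without real interfaces:

```shell
# Sketch of the get_ip_address logic visible in the xtrace above. This is a
# reconstruction from the log, not the real helper: setup.sh resolves a
# namespace command array instead of taking a sysfs root, but the core is
# the same "cat ifalias and echo it if non-empty" step.
get_ip_address() {
    dev=$1
    sysfs=${2:-/sys/class/net}
    ip=$(cat "$sysfs/$dev/ifalias" 2>/dev/null)
    # The ifalias file holds the configured address; echo only when set
    [ -n "$ip" ] && echo "$ip"
}
```

With the interfaces from this run, `get_ip_address initiator0` prints 10.0.0.1 on the host side, and the same read executed inside nvmf_ns_spdk yields 10.0.0.2 for target0.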
00:17:54.830 [2024-11-20 09:11:33.584937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:54.830 [2024-11-20 09:11:33.585066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:54.830 [2024-11-20 09:11:33.585074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.830 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.830 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:54.830 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:54.830 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.830 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:55.089 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.089 09:11:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:55.347 [2024-11-20 09:11:34.066620] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:55.347 09:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:55.606 Malloc0 00:17:55.606 09:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.865 09:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.124 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.383 [2024-11-20 09:11:35.244312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.383 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:56.641 [2024-11-20 09:11:35.504498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:56.641 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:17:56.900 [2024-11-20 09:11:35.752696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88472 00:17:56.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88472 /var/tmp/bdevperf.sock 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88472 ']' 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.900 09:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.468 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.468 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:57.468 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:57.726 NVMe0n1 00:17:57.726 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:57.986 00:17:57.986 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88506 
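Stripped of timestamps and xtrace noise, the setup performed by host/failover.sh@22–28 above boils down to a handful of RPCs: create the TCP transport, back a subsystem with a malloc bdev, and expose it on three ports (4420/4421/4422) so listeners can later be pulled out from under the initiator. A hedged recap of that sequence — the `RPC` override is an illustrative addition so the calls can be traced against a stub; the actual test invokes scripts/rpc.py directly, as the log shows:

```shell
# Recap of the target setup RPCs from the log above. Assumption for
# illustration: RPC may be overridden (e.g. RPC=echo) to trace the sequence
# without a live SPDK target; in the real run this is scripts/rpc.py
# talking to nvmf_tgt over /var/tmp/spdk.sock.
setup_failover_target() {
    rpc=${RPC:-scripts/rpc.py}
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three listeners on the same address give bdevperf alternate paths
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s "$port"
    done
}
```

After this, bdevperf attaches the first two paths with `bdev_nvme_attach_controller ... -x failover`, which is what makes the later listener removals exercise path failover rather than plain I/O errors.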
00:17:57.986 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:57.986 09:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:17:58.932 09:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.499 09:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:02.781 09:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:02.781 00:18:02.781 09:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:03.041 [2024-11-20 09:11:41.779523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e8600 is same with the state(6) to be set 00:18:03.043 09:11:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@50 -- # sleep 3 00:18:06.329 09:11:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.329 [2024-11-20 09:11:45.090234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.329 09:11:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:07.264 09:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:07.524 [2024-11-20 09:11:46.384591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b32930 is same with the state(6) to be set [... identical error for tqpair=0x1b32930 repeated between 09:11:46.384591 and 09:11:46.385688, duplicates omitted ...] 00:18:07.525 09:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88506 00:18:14.128 { 00:18:14.128 "results": [ 00:18:14.128 { 00:18:14.128 "job": "NVMe0n1", 00:18:14.128 "core_mask": "0x1", 00:18:14.128 "workload": "verify", 00:18:14.128 "status": "finished", 00:18:14.128 "verify_range": { 00:18:14.128 "start": 0, 00:18:14.128 "length": 16384 00:18:14.128 }, 00:18:14.128 
"queue_depth": 128, 00:18:14.128 "io_size": 4096, 00:18:14.128 "runtime": 15.009481, 00:18:14.128 "iops": 8644.202954119466, 00:18:14.128 "mibps": 33.766417789529164, 00:18:14.128 "io_failed": 3333, 00:18:14.128 "io_timeout": 0, 00:18:14.128 "avg_latency_us": 14405.557666057774, 00:18:14.128 "min_latency_us": 722.3854545454545, 00:18:14.128 "max_latency_us": 25737.774545454544 00:18:14.128 } 00:18:14.128 ], 00:18:14.128 "core_count": 1 00:18:14.128 } 00:18:14.128 09:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88472 00:18:14.128 09:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88472 ']' 00:18:14.128 09:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88472 00:18:14.128 09:11:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88472 00:18:14.128 killing process with pid 88472 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88472' 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88472 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88472 00:18:14.128 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:14.128 [2024-11-20 09:11:35.822315] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:18:14.128 [2024-11-20 09:11:35.822452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88472 ] 00:18:14.128 [2024-11-20 09:11:35.967222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.128 [2024-11-20 09:11:36.013774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.128 Running I/O for 15 seconds... 00:18:14.128 9021.00 IOPS, 35.24 MiB/s [2024-11-20T09:11:53.047Z] [2024-11-20 09:11:38.124343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.128 [2024-11-20 09:11:38.124414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.128 [2024-11-20 09:11:38.124436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.128 [2024-11-20 09:11:38.124451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.128 [2024-11-20 09:11:38.124475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.128 [2024-11-20 09:11:38.124488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.128 [2024-11-20 09:11:38.124503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.128 [2024-11-20 09:11:38.124517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:18:14.128 [2024-11-20 09:11:38.124531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1815f30 is same with the state(6) to be set 00:18:14.129 [2024-11-20 09:11:38.124610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.124632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.124966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.124983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 
[2024-11-20 09:11:38.125195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125375] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.129 [2024-11-20 09:11:38.125642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125752] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.129 [2024-11-20 09:11:38.125798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.129 [2024-11-20 09:11:38.125815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.125829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.125845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.125860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.125876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.125891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.125907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.125956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:14 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.125972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.125988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:14.130 [2024-11-20 09:11:38.126159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 
[2024-11-20 09:11:38.126723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.130 [2024-11-20 09:11:38.126867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.130 [2024-11-20 09:11:38.126882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.126898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.126913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.126929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.126944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.126960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.126982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.126999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 
[2024-11-20 09:11:38.127295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127474] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:84480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:84488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:84496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 
[2024-11-20 09:11:38.127852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.131 [2024-11-20 09:11:38.127949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.131 [2024-11-20 09:11:38.127964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.127980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.127994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:84576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:84624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 
[2024-11-20 09:11:38.128396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.132 [2024-11-20 09:11:38.128838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.132 [2024-11-20 09:11:38.128884] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.132 [2024-11-20 09:11:38.128896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84784 len:8 PRP1 0x0 PRP2 0x0 00:18:14.132 [2024-11-20 09:11:38.128910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.132 [2024-11-20 09:11:38.128972] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:14.132 [2024-11-20 09:11:38.128991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:18:14.132 [2024-11-20 09:11:38.132934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:14.132 [2024-11-20 09:11:38.132972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1815f30 (9): Bad file descriptor 00:18:14.132 [2024-11-20 09:11:38.163282] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:18:14.132 8812.50 IOPS, 34.42 MiB/s [2024-11-20T09:11:53.051Z] 8678.00 IOPS, 33.90 MiB/s [2024-11-20T09:11:53.051Z] 8781.75 IOPS, 34.30 MiB/s [2024-11-20T09:11:53.051Z] [2024-11-20 09:11:41.780122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.132 [2024-11-20 09:11:41.780174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.780192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.133 [2024-11-20 09:11:41.780272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.780287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.133 [2024-11-20 09:11:41.780300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.780314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.133 [2024-11-20 09:11:41.780327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.780340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1815f30 is same with the state(6) to be set 00:18:14.133 [2024-11-20 09:11:41.783034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783217] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.133 [2024-11-20 09:11:41.783683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.133 [2024-11-20 09:11:41.783714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.133 [2024-11-20 09:11:41.783744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.133 [2024-11-20 09:11:41.783784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.133 [2024-11-20 09:11:41.783814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.133 [2024-11-20 09:11:41.783831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.783848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.783862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.783878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.783893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.783909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.783922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.783938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.783952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.783976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.783991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 
[2024-11-20 09:11:41.784139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784324] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.134 [2024-11-20 09:11:41.784819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.134 [2024-11-20 09:11:41.784835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.784849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.784865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.784879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.784895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.784908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.784925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.784939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.784954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.784968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.784996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:86488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 
09:11:41.785057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:86568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86592 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:86616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.135 [2024-11-20 09:11:41.785737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.135 [2024-11-20 09:11:41.785807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86688 len:8 PRP1 0x0 PRP2 0x0 00:18:14.135 [2024-11-20 09:11:41.785821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.135 [2024-11-20 09:11:41.785850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.135 [2024-11-20 09:11:41.785861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86696 len:8 PRP1 0x0 PRP2 0x0 00:18:14.135 [2024-11-20 09:11:41.785875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.135 [2024-11-20 09:11:41.785889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.785899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.785909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86704 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.785935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.785951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.785961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.785972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86712 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.785985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.785999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86720 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86728 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786094] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86736 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86744 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86752 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86760 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86768 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786352] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86776 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86784 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86792 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86800 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86808 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86816 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 
[2024-11-20 09:11:41.786650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86824 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86832 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86840 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.136 [2024-11-20 09:11:41.786818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:86848 len:8 PRP1 0x0 PRP2 0x0 00:18:14.136 [2024-11-20 09:11:41.786832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.136 [2024-11-20 09:11:41.786845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.136 [2024-11-20 09:11:41.786855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.786870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86856 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.786898] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.786908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.786919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86864 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.786932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.786946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.786956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.786974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86872 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.786989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.787003] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.787013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.787023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86880 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.787036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.787050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.787060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.787071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86888 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.787084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.787098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.787108] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.787118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86896 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.787132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.787145] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.787155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 
09:11:41.796450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86904 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86912 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86920 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86928 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796675] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86936 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86944 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86952 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796869] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86960 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86968 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.796954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.796964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.796974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86112 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.796987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86120 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 
[2024-11-20 09:11:41.797036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86128 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.797083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86136 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.797139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86144 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.797187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86152 len:8 PRP1 0x0 PRP2 0x0 00:18:14.137 [2024-11-20 09:11:41.797235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.137 [2024-11-20 09:11:41.797249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.137 [2024-11-20 09:11:41.797259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.137 [2024-11-20 09:11:41.797269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86160 len:8 PRP1 0x0 PRP2 0x0 00:18:14.138 [2024-11-20 09:11:41.797283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:41.797353] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:18:14.138 [2024-11-20 09:11:41.797373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:14.138 [2024-11-20 09:11:41.797432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1815f30 (9): Bad file descriptor 00:18:14.138 [2024-11-20 09:11:41.803043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:14.138 [2024-11-20 09:11:41.826598] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:18:14.138 8712.40 IOPS, 34.03 MiB/s [2024-11-20T09:11:53.057Z] 8719.17 IOPS, 34.06 MiB/s [2024-11-20T09:11:53.057Z] 8779.71 IOPS, 34.30 MiB/s [2024-11-20T09:11:53.057Z] 8868.00 IOPS, 34.64 MiB/s [2024-11-20T09:11:53.057Z] 8938.22 IOPS, 34.91 MiB/s [2024-11-20T09:11:53.057Z] [2024-11-20 09:11:46.385927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.385984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:35760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 
09:11:46.386350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.138 [2024-11-20 09:11:46.386778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 
09:11:46.386903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.138 [2024-11-20 09:11:46.386964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.138 [2024-11-20 09:11:46.386982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.386996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387107] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387483] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.139 [2024-11-20 09:11:46.387576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.139 [2024-11-20 09:11:46.387793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.139 [2024-11-20 09:11:46.387810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.387840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 
[2024-11-20 09:11:46.387871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.387901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.387930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.387970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.387985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36576 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 
09:11:46.388424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.388973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.388989] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.140 [2024-11-20 09:11:46.389003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.140 [2024-11-20 09:11:46.389025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:36200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:14.141 [2024-11-20 09:11:46.389366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.389946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 
[2024-11-20 09:11:46.389980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.389996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.390010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.390026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.390041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.390057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.390071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.390088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.390102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.141 [2024-11-20 09:11:46.390118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.141 [2024-11-20 09:11:46.390133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.142 [2024-11-20 09:11:46.390170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.142 [2024-11-20 09:11:46.390200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:14.142 [2024-11-20 09:11:46.390231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.142 [2024-11-20 09:11:46.390322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.142 [2024-11-20 09:11:46.390334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36408 len:8 PRP1 0x0 PRP2 0x0 00:18:14.142 [2024-11-20 09:11:46.390348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390413] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:14.142 [2024-11-20 09:11:46.390487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.142 [2024-11-20 09:11:46.390510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.142 [2024-11-20 09:11:46.390540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.142 [2024-11-20 09:11:46.390568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.142 [2024-11-20 09:11:46.390596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.142 [2024-11-20 09:11:46.390611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:14.142 [2024-11-20 09:11:46.390648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1815f30 (9): Bad file descriptor 00:18:14.142 [2024-11-20 09:11:46.394587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:18:14.142 [2024-11-20 09:11:46.424321] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:18:14.142 8852.40 IOPS, 34.58 MiB/s [2024-11-20T09:11:53.061Z] 8785.45 IOPS, 34.32 MiB/s [2024-11-20T09:11:53.061Z] 8730.92 IOPS, 34.11 MiB/s [2024-11-20T09:11:53.061Z] 8701.46 IOPS, 33.99 MiB/s [2024-11-20T09:11:53.061Z] 8681.29 IOPS, 33.91 MiB/s [2024-11-20T09:11:53.061Z] 8641.47 IOPS, 33.76 MiB/s 00:18:14.142 Latency(us) 00:18:14.142 [2024-11-20T09:11:53.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.142 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:14.142 Verification LBA range: start 0x0 length 0x4000 00:18:14.142 NVMe0n1 : 15.01 8644.20 33.77 222.06 0.00 14405.56 722.39 25737.77 00:18:14.142 [2024-11-20T09:11:53.061Z] =================================================================================================================== 00:18:14.142 [2024-11-20T09:11:53.061Z] Total : 8644.20 33.77 222.06 0.00 14405.56 722.39 25737.77 00:18:14.142 Received shutdown signal, test time was about 15.000000 seconds 00:18:14.142 00:18:14.142 Latency(us) 00:18:14.142 [2024-11-20T09:11:53.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.142 [2024-11-20T09:11:53.061Z] =================================================================================================================== 00:18:14.142 [2024-11-20T09:11:53.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88714 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 
1 -f 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88714 /var/tmp/bdevperf.sock 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88714 ']' 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:14.142 [2024-11-20 09:11:52.878397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:14.142 09:11:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:14.401 [2024-11-20 09:11:53.162826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:14.401 09:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:14.659 NVMe0n1 00:18:14.659 09:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:15.225 00:18:15.225 09:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:15.483 00:18:15.483 09:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:15.483 09:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:15.741 09:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:15.999 09:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:19.282 09:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:19.282 09:11:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:19.282 09:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:19.282 09:11:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88834 00:18:19.282 09:11:58 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88834 00:18:20.658 { 00:18:20.658 "results": [ 00:18:20.658 { 00:18:20.658 "job": "NVMe0n1", 00:18:20.658 "core_mask": "0x1", 00:18:20.658 "workload": "verify", 00:18:20.658 "status": "finished", 00:18:20.658 "verify_range": { 00:18:20.658 "start": 0, 00:18:20.658 "length": 16384 00:18:20.658 }, 00:18:20.658 "queue_depth": 128, 00:18:20.658 "io_size": 4096, 00:18:20.658 "runtime": 1.016277, 00:18:20.658 "iops": 9341.941222717822, 00:18:20.658 "mibps": 36.49195790124149, 00:18:20.658 "io_failed": 0, 00:18:20.658 "io_timeout": 0, 00:18:20.658 "avg_latency_us": 13635.141352433116, 00:18:20.658 "min_latency_us": 2025.658181818182, 00:18:20.658 "max_latency_us": 15132.858181818181 00:18:20.658 } 00:18:20.658 ], 00:18:20.658 "core_count": 1 00:18:20.658 } 00:18:20.658 09:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:20.658 [2024-11-20 09:11:52.297342] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:18:20.658 [2024-11-20 09:11:52.297470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88714 ] 00:18:20.658 [2024-11-20 09:11:52.439973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.658 [2024-11-20 09:11:52.506811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.658 [2024-11-20 09:11:54.799905] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:20.658 [2024-11-20 09:11:54.800043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.658 [2024-11-20 09:11:54.800070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.658 [2024-11-20 09:11:54.800089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.658 [2024-11-20 09:11:54.800104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.658 [2024-11-20 09:11:54.800119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.658 [2024-11-20 09:11:54.800133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.658 [2024-11-20 09:11:54.800148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.658 [2024-11-20 09:11:54.800161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.658 [2024-11-20 09:11:54.800176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:20.658 [2024-11-20 09:11:54.800230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:20.658 [2024-11-20 09:11:54.800264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1439f30 (9): Bad file descriptor 00:18:20.658 [2024-11-20 09:11:54.811125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:20.658 Running I/O for 1 seconds... 00:18:20.658 9270.00 IOPS, 36.21 MiB/s 00:18:20.658 Latency(us) 00:18:20.658 [2024-11-20T09:11:59.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.658 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:20.658 Verification LBA range: start 0x0 length 0x4000 00:18:20.658 NVMe0n1 : 1.02 9341.94 36.49 0.00 0.00 13635.14 2025.66 15132.86 00:18:20.658 [2024-11-20T09:11:59.577Z] =================================================================================================================== 00:18:20.658 [2024-11-20T09:11:59.577Z] Total : 9341.94 36.49 0.00 0.00 13635.14 2025.66 15132.86 00:18:20.658 09:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:20.658 09:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:20.916 09:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.175 09:11:59 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:21.175 09:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:21.175 09:12:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:21.433 09:12:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:24.719 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:24.719 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88714 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88714 ']' 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88714 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88714 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.978 killing process with pid 88714 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88714' 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@973 -- # kill 88714 00:18:24.978 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88714 00:18:25.237 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:25.237 09:12:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:25.496 rmmod nvme_tcp 00:18:25.496 rmmod nvme_fabrics 00:18:25.496 rmmod nvme_keyring 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 88374 ']' 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 88374 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@954 -- # '[' -z 88374 ']' 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88374 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88374 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.496 killing process with pid 88374 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88374' 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88374 00:18:25.496 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88374 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:25.755 09:12:04 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:18:25.755 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@114 -- # local dev=initiator1 
in_ns= 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # continue 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:18:26.014 00:18:26.014 real 0m32.163s 00:18:26.014 user 2m5.141s 00:18:26.014 sys 0m4.394s 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:26.014 ************************************ 00:18:26.014 END TEST 
nvmf_failover 00:18:26.014 ************************************ 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:26.014 ************************************ 00:18:26.014 START TEST nvmf_host_multipath_status 00:18:26.014 ************************************ 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:26.014 * Looking for test storage... 00:18:26.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.014 09:12:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- scripts/common.sh@336 -- # read -ra ver1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.275 --rc genhtml_branch_coverage=1 00:18:26.275 --rc genhtml_function_coverage=1 00:18:26.275 --rc genhtml_legend=1 00:18:26.275 --rc 
geninfo_all_blocks=1 00:18:26.275 --rc geninfo_unexecuted_blocks=1 00:18:26.275 00:18:26.275 ' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.275 --rc genhtml_branch_coverage=1 00:18:26.275 --rc genhtml_function_coverage=1 00:18:26.275 --rc genhtml_legend=1 00:18:26.275 --rc geninfo_all_blocks=1 00:18:26.275 --rc geninfo_unexecuted_blocks=1 00:18:26.275 00:18:26.275 ' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.275 --rc genhtml_branch_coverage=1 00:18:26.275 --rc genhtml_function_coverage=1 00:18:26.275 --rc genhtml_legend=1 00:18:26.275 --rc geninfo_all_blocks=1 00:18:26.275 --rc geninfo_unexecuted_blocks=1 00:18:26.275 00:18:26.275 ' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.275 --rc genhtml_branch_coverage=1 00:18:26.275 --rc genhtml_function_coverage=1 00:18:26.275 --rc genhtml_legend=1 00:18:26.275 --rc geninfo_all_blocks=1 00:18:26.275 --rc geninfo_unexecuted_blocks=1 00:18:26.275 00:18:26.275 ' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.275 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.275 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.275 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@50 -- # : 0 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:26.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # 
NQN=nqn.2016-06.io.spdk:cnode1 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@280 -- # nvmf_veth_init 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:18:26.276 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@223 -- # create_target_ns 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # create_main_bridge 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@105 -- # delete_main_bridge 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:18:26.276 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/setup.sh@44 -- # ips=() 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator0 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:18:26.276 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator0_br 
00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target0 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0 up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 
-- # eval ' ip link set target0_br up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target0 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:26.277 10.0.0.1 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:18:26.277 10.0.0.2 00:18:26.277 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator0 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:18:26.277 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:18:26.277 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target0_br 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip 
link add initiator1 type veth peer name initiator1_br 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up initiator1 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:18:26.537 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@151 -- # set_up target1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:18:26.538 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1 up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # set_up target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns target1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772163 
00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:18:26.538 10.0.0.3 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772164 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:18:26.538 10.0.0.4 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up initiator1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:18:26.538 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@129 -- # set_up target1_br 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:18:26.538 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 2 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:18:26.538 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:26.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:26.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:18:26.539 00:18:26.539 --- 10.0.0.1 ping statistics --- 00:18:26.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.539 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:18:26.539 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:26.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:26.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:18:26.539 00:18:26.539 --- 10.0.0.2 ping statistics --- 00:18:26.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.539 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@163 -- # ip=10.0.0.3 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:18:26.539 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:26.539 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:18:26.539 00:18:26.539 --- 10.0.0.3 ping statistics --- 00:18:26.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.539 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- 
# ip=10.0.0.4 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:18:26.539 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:26.539 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.108 ms 00:18:26.539 00:18:26.539 --- 10.0.0.4 ping statistics --- 00:18:26.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.539 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # return 0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:18:26.539 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:26.539 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator0 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator0 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:26.540 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:26.799 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=initiator1 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:26.799 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:26.799 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target0 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target0 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:26.800 09:12:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo target1 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=target1 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:18:26.800 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=89197 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 89197 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89197 ']' 00:18:26.800 
09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.800 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:26.800 [2024-11-20 09:12:05.594934] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:18:26.800 [2024-11-20 09:12:05.595047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:27.059 [2024-11-20 09:12:05.750889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:27.059 [2024-11-20 09:12:05.816742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:27.060 [2024-11-20 09:12:05.816833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:27.060 [2024-11-20 09:12:05.816858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:27.060 [2024-11-20 09:12:05.816869] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:27.060 [2024-11-20 09:12:05.816878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:27.060 [2024-11-20 09:12:05.818302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.060 [2024-11-20 09:12:05.818318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.060 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.060 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:27.060 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:27.060 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:27.060 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:27.318 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.318 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89197 00:18:27.318 09:12:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:27.576 [2024-11-20 09:12:06.306533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.576 09:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:27.835 Malloc0 
00:18:27.835 09:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:28.094 09:12:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:28.352 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.611 [2024-11-20 09:12:07.508583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.870 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:28.870 [2024-11-20 09:12:07.772686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89293 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89293 /var/tmp/bdevperf.sock 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89293 ']' 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.129 09:12:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:30.066 09:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.066 09:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:30.066 09:12:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:30.323 09:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:30.890 Nvme0n1 00:18:30.890 09:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:31.147 Nvme0n1 00:18:31.147 09:12:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:31.147 09:12:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:33.056 09:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:33.056 09:12:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:33.623 09:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:33.881 09:12:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:34.816 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:34.816 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:34.816 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:34.816 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:35.075 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.075 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:35.075 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.075 09:12:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:35.337 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:35.337 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:35.337 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.337 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:35.599 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.599 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:35.599 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:35.599 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.861 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:35.861 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:35.861 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:35.861 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:36.123 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.123 09:12:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:36.123 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.123 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:36.691 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:36.691 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:36.691 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:36.691 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:36.949 09:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:38.324 09:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:38.324 09:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:38.324 09:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.324 09:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:38.324 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:38.324 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:38.324 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.324 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:38.583 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.583 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:38.583 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:38.583 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.841 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:38.841 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 
00:18:38.841 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:38.841 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:39.100 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.100 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:39.100 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.100 09:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:39.358 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.358 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:39.358 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.358 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:39.617 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.617 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:39.617 09:12:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:39.876 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:40.134 09:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:41.073 09:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:41.073 09:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:41.074 09:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.074 09:12:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:41.640 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.640 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:41.640 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.640 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:41.939 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:41.939 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:41.939 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.939 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:42.216 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.216 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:42.216 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.216 09:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:42.216 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.216 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:42.216 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.217 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:42.784 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:43.043 09:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:43.301 09:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.688 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:44.948 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:44.948 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:44.948 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.948 09:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:45.206 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.206 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:45.206 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.206 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:18:45.465 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:45.465 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:45.465 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:45.465 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:46.031 09:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:46.289 09:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:46.547 09:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.920 09:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:48.179 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.179 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:48.179 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.179 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:48.438 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.438 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:48.438 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.438 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:48.697 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.697 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:48.697 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.697 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:48.955 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.955 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:48.955 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:18:48.955 09:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:49.213 09:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:49.213 09:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:49.213 09:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:49.780 09:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:49.780 09:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 
current true 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.156 09:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:51.413 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.413 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:51.413 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.413 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:51.671 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.671 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:51.671 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.671 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:51.930 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.930 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:51.930 09:12:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.930 09:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:52.195 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:52.195 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:52.195 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.195 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:52.761 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.761 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:52.761 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:52.762 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:18:53.327 09:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 
-s 4421 -n optimized 00:18:53.327 09:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.698 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:54.957 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.957 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:54.957 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.957 09:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:55.216 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.216 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:55.216 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.216 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.784 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:18:56.351 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.351 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:56.351 09:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:56.610 09:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:56.869 09:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:57.805 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:57.805 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:57.805 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.806 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:58.065 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:58.065 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:58.065 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.065 09:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:58.325 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.325 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:58.325 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.325 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:58.583 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.583 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:58.583 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.583 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:58.841 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.841 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:58.841 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:18:58.841 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.099 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.099 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:59.099 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.099 09:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:59.665 09:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.665 09:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:59.665 09:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:59.665 09:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:18:59.923 09:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:01.297 09:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:01.297 09:12:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:01.297 09:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.297 09:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.297 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.297 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:01.297 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.297 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:01.555 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.555 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:01.555 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.555 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:01.814 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:01.814 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:01.814 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.814 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:02.072 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.072 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:02.072 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.072 09:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:02.331 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.331 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:02.331 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.331 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:02.590 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.590 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state 
non_optimized inaccessible 00:19:02.590 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:02.849 09:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:03.421 09:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:04.363 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:04.363 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:04.363 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.363 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.621 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:04.621 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:04.621 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:04.621 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:04.880 09:12:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:04.880 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:04.880 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:04.880 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.138 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.138 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:05.138 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.138 09:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.397 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.397 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:05.397 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.397 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:05.655 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.655 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:05.655 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.655 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89293 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89293 ']' 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89293 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89293 00:19:05.914 killing process with pid 89293 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89293' 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 
-- # kill 89293 00:19:05.914 09:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89293 00:19:05.914 { 00:19:05.914 "results": [ 00:19:05.914 { 00:19:05.914 "job": "Nvme0n1", 00:19:05.914 "core_mask": "0x4", 00:19:05.914 "workload": "verify", 00:19:05.914 "status": "terminated", 00:19:05.914 "verify_range": { 00:19:05.914 "start": 0, 00:19:05.914 "length": 16384 00:19:05.914 }, 00:19:05.914 "queue_depth": 128, 00:19:05.914 "io_size": 4096, 00:19:05.914 "runtime": 34.713516, 00:19:05.914 "iops": 7898.681309032482, 00:19:05.914 "mibps": 30.854223863408134, 00:19:05.914 "io_failed": 0, 00:19:05.914 "io_timeout": 0, 00:19:05.914 "avg_latency_us": 16176.403543196995, 00:19:05.914 "min_latency_us": 221.55636363636364, 00:19:05.914 "max_latency_us": 4026531.84 00:19:05.914 } 00:19:05.914 ], 00:19:05.914 "core_count": 1 00:19:05.914 } 00:19:06.175 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89293 00:19:06.175 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:06.175 [2024-11-20 09:12:07.856491] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:06.175 [2024-11-20 09:12:07.856617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89293 ] 00:19:06.175 [2024-11-20 09:12:08.002498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.175 [2024-11-20 09:12:08.069366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.175 Running I/O for 90 seconds... 
00:19:06.175 8797.00 IOPS, 34.36 MiB/s [2024-11-20T09:12:45.094Z] 8928.50 IOPS, 34.88 MiB/s [2024-11-20T09:12:45.094Z] 8874.00 IOPS, 34.66 MiB/s [2024-11-20T09:12:45.094Z] 8876.25 IOPS, 34.67 MiB/s [2024-11-20T09:12:45.094Z] 8817.40 IOPS, 34.44 MiB/s [2024-11-20T09:12:45.094Z] 8786.67 IOPS, 34.32 MiB/s [2024-11-20T09:12:45.094Z] 8726.29 IOPS, 34.09 MiB/s [2024-11-20T09:12:45.094Z] 8696.25 IOPS, 33.97 MiB/s [2024-11-20T09:12:45.094Z] 8679.00 IOPS, 33.90 MiB/s [2024-11-20T09:12:45.094Z] 8699.30 IOPS, 33.98 MiB/s [2024-11-20T09:12:45.094Z] 8676.18 IOPS, 33.89 MiB/s [2024-11-20T09:12:45.094Z] 8681.67 IOPS, 33.91 MiB/s [2024-11-20T09:12:45.094Z] 8694.31 IOPS, 33.96 MiB/s [2024-11-20T09:12:45.094Z] 8696.64 IOPS, 33.97 MiB/s [2024-11-20T09:12:45.094Z] 8689.33 IOPS, 33.94 MiB/s [2024-11-20T09:12:45.094Z] [2024-11-20 09:12:25.138041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.175 [2024-11-20 09:12:25.138109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 
nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:06.176 [2024-11-20 09:12:25.138477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:129024 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:129064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 
cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.138973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.138998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:129104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129112 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:129136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:129144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:129152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004b p:0 
m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:129160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.139957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.139979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:129176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:129200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:06.176 [2024-11-20 09:12:25.140119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:129208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:06.176 [2024-11-20 09:12:25.140215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.176 [2024-11-20 09:12:25.140229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:19:06.177 [2024-11-20 09:12:25.140322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:129248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:129272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:129288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 
[2024-11-20 09:12:25.140556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:129296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:129312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 
09:12:25.140778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:129344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:129352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:129368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.140936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.140966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 
09:12:25.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:129392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 
09:12:25.141303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:129424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:129432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:129440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:129464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 
09:12:25.141516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:129488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 
09:12:25.141741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:129512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.177 [2024-11-20 09:12:25.141860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:06.177 [2024-11-20 09:12:25.141883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.141897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.141929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.141962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.141986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 
09:12:25.142001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 
09:12:25.142218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 
09:12:25.142459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 09:12:25.142638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:06.178 [2024-11-20 09:12:25.142652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:06.178 [2024-11-20 
09:12:25.142673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.178 [2024-11-20 09:12:25.142922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.142958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.142982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.142996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:128784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:128792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:19:06.178 [2024-11-20 09:12:25.143386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.178 [2024-11-20 09:12:25.143402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:25.143719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:128864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.143976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.143990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.144015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.144030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:25.144056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:25.144072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0
8173.69 IOPS, 31.93 MiB/s [2024-11-20T09:12:45.098Z]
7692.88 IOPS, 30.05 MiB/s [2024-11-20T09:12:45.098Z]
7265.50 IOPS, 28.38 MiB/s [2024-11-20T09:12:45.098Z]
6883.11 IOPS, 26.89 MiB/s [2024-11-20T09:12:45.098Z]
6971.30 IOPS, 27.23 MiB/s [2024-11-20T09:12:45.098Z]
7076.90 IOPS, 27.64 MiB/s [2024-11-20T09:12:45.098Z]
7172.64 IOPS, 28.02 MiB/s [2024-11-20T09:12:45.098Z]
7273.78 IOPS, 28.41 MiB/s [2024-11-20T09:12:45.098Z]
7368.71 IOPS, 28.78 MiB/s [2024-11-20T09:12:45.098Z]
7458.56 IOPS, 29.14 MiB/s [2024-11-20T09:12:45.098Z]
7529.19 IOPS, 29.41 MiB/s [2024-11-20T09:12:45.098Z]
7592.63 IOPS, 29.66 MiB/s [2024-11-20T09:12:45.098Z]
7648.18 IOPS, 29.88 MiB/s [2024-11-20T09:12:45.098Z]
7694.07 IOPS, 30.05 MiB/s [2024-11-20T09:12:45.098Z]
7745.03 IOPS, 30.25 MiB/s [2024-11-20T09:12:45.098Z]
7798.00 IOPS, 30.46 MiB/s [2024-11-20T09:12:45.098Z]
[2024-11-20 09:12:42.031115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.031381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.031711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.031806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.031969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.031983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:06.179 [2024-11-20 09:12:42.032539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.032574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.032611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.032647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:19:06.179 [2024-11-20 09:12:42.032669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.179 [2024-11-20 09:12:42.032684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:19:06.180 [2024-11-20 09:12:42.032705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.180 [2024-11-20 09:12:42.032721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:19:06.180 [2024-11-20 09:12:42.032742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:06.180 [2024-11-20 09:12:42.032785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0
7847.69 IOPS, 30.66 MiB/s [2024-11-20T09:12:45.099Z]
7867.88 IOPS, 30.73 MiB/s [2024-11-20T09:12:45.099Z]
7885.59 IOPS, 30.80 MiB/s [2024-11-20T09:12:45.099Z]
Received shutdown signal, test time was about 34.714246 seconds
00:19:06.180
00:19:06.180 Latency(us)
00:19:06.180 [2024-11-20T09:12:45.099Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:06.180 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:06.180 Verification LBA range: start 0x0 length 0x4000
00:19:06.180 Nvme0n1 : 34.71 7898.68 30.85 0.00 0.00 16176.40 221.56 4026531.84
00:19:06.180 [2024-11-20T09:12:45.099Z] ===================================================================================================================
00:19:06.180 [2024-11-20T09:12:45.099Z] Total : 7898.68 30.85 0.00 0.00 16176.40 221.56 4026531.84
00:19:06.180 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20}
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e
00:19:06.748 09:12:45
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 89197 ']'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 89197
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89197 ']'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89197
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89197
00:19:06.748 killing process with pid 89197
09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89197'
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89197
00:19:06.748 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89197
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@122 -- # delete_dev nvmf_br
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns=
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br'
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete nvmf_br
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator0
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns=
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0'
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator0
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 3 == 3 ))
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@266 -- # delete_dev initiator1
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns=
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@115 -- # [[ -n '' ]]
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1'
00:19:07.007 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@117 -- # ip link delete initiator1
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]]
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # continue
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=()
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:19:07.267
00:19:07.267 real 0m41.095s
00:19:07.267 user 2m14.331s
00:19:07.267 sys 0m10.275s
00:19:07.267 ************************************
00:19:07.267 END TEST nvmf_host_multipath_status
00:19:07.267 ************************************
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:07.267 ************************************
00:19:07.267 START TEST nvmf_identify_kernel_target
00:19:07.267 ************************************
00:19:07.267 09:12:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:19:07.267 * Looking for test storage...
00:19:07.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:19:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.267 --rc genhtml_branch_coverage=1
00:19:07.267 --rc genhtml_function_coverage=1
00:19:07.267 --rc genhtml_legend=1
00:19:07.267 --rc geninfo_all_blocks=1
00:19:07.267 --rc geninfo_unexecuted_blocks=1
00:19:07.267
00:19:07.267 '
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:19:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.267 --rc genhtml_branch_coverage=1
00:19:07.267 --rc genhtml_function_coverage=1
00:19:07.267 --rc genhtml_legend=1
00:19:07.267 --rc geninfo_all_blocks=1
00:19:07.267 --rc geninfo_unexecuted_blocks=1
00:19:07.267
00:19:07.267 '
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:19:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.267 --rc genhtml_branch_coverage=1
00:19:07.267 --rc genhtml_function_coverage=1
00:19:07.267 --rc genhtml_legend=1
00:19:07.267 --rc geninfo_all_blocks=1
00:19:07.267 --rc geninfo_unexecuted_blocks=1
00:19:07.267
00:19:07.267 '
00:19:07.267 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:19:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:07.267 --rc genhtml_branch_coverage=1
00:19:07.267 --rc genhtml_function_coverage=1
00:19:07.267 --rc genhtml_legend=1
00:19:07.267 --rc geninfo_all_blocks=1
00:19:07.267 --rc geninfo_unexecuted_blocks=1
00:19:07.268
00:19:07.268 '
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:07.268 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=virt
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:19:07.528 09:12:46
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@50 -- # : 0 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:07.528 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@223 -- # create_target_ns 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:07.528 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:07.529 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:07.529 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:07.529 10.0.0.1 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:07.529 10.0.0.2 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge 
target0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:07.529 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 
167772163 tcp 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:07.530 
09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@151 -- # set_up target1 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:07.530 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772163 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee 
/sys/class/net/initiator1/ifalias' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:07.791 10.0.0.3 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772164 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:07.791 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:07.791 10.0.0.4 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set 
initiator1_br master nvmf_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 
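Condensed from the trace above, the per-pair plumbing for pair 1 amounts to the following (a reconstruction of the commands the xtrace shows, not the `nvmf/setup.sh` source itself; it needs root and an existing `nvmf_ns_spdk` namespace and `nvmf_br` bridge):

```shell
# pair 1: initiator1 stays in the root namespace, target1 moves into the netns
ip link set target1 netns nvmf_ns_spdk
ip addr add 10.0.0.3/24 dev initiator1
echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
ip link set initiator1 up
ip netns exec nvmf_ns_spdk ip link set target1 up
# bridge the host-side veth peers so initiator and target can reach each other
ip link set initiator1_br master nvmf_br
ip link set target1_br master nvmf_br
ip link set initiator1_br up
ip link set target1_br up
# open the NVMe/TCP port for traffic arriving on the new initiator interface
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
```

The `ifalias` writes are what let the later `get_ip_address` calls in the trace recover each device's address with a plain `cat` instead of parsing `ip addr` output.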
00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n 
initiator0 ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:07.791 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:07.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:07.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:19:07.792 00:19:07.792 --- 10.0.0.1 ping statistics --- 00:19:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.792 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:07.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:07.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:19:07.792 00:19:07.792 --- 10.0.0.2 ping statistics --- 00:19:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.792 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:07.792 
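The `ping_ips 2` loop traced above can be sketched as follows (a reconstruction using the 10.0.0.x layout and helper names from the log, not the actual bash source): `val_to_ip` mirrors the `printf '%u.%u.%u.%u'` conversion of the integer `ip_pool` counter, and each pair pings the initiator address from inside the `nvmf_ns_spdk` namespace and the target address from the host side.

```python
def val_to_ip(val: int) -> str:
    # Most significant octet first, matching printf '%u.%u.%u.%u' in the trace,
    # e.g. 167772163 -> 10.0.0.3.
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def ping_commands(pairs: int, base: int = 167772161) -> list[str]:
    # base is 10.0.0.1; each pair consumes two consecutive addresses
    # (initiator, then target), as the `ip_pool += 2` step in the log shows.
    cmds = []
    for pair in range(pairs):
        initiator_ip = val_to_ip(base + 2 * pair)
        target_ip = val_to_ip(base + 2 * pair + 1)
        # initiator address is reachable from the target-side namespace...
        cmds.append(f"ip netns exec nvmf_ns_spdk ping -c 1 {initiator_ip}")
        # ...and the target address from the root namespace.
        cmds.append(f"ping -c 1 {target_ip}")
    return cmds

for cmd in ping_commands(2):
    print(cmd)
```

For two pairs this yields pings of 10.0.0.1 and 10.0.0.3 from inside the namespace and of 10.0.0.2 and 10.0.0.4 from the host, matching the four ping transcripts in the log.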
09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:07.792 PING 10.0.0.3 
(10.0.0.3) 56(84) bytes of data. 00:19:07.792 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:07.792 00:19:07.792 --- 10.0.0.3 ping statistics --- 00:19:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.792 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:07.792 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:07.792 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:07.792 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:19:07.792 00:19:07.792 --- 10.0.0.4 ping statistics --- 00:19:07.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:07.792 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # return 0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 
-- # NVMF_TARGET_INTERFACE=target0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:07.792 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:07.793 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:07.793 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo target1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=target1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
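For reference, the `nvmf_legacy_env` step being traced here resolves to the following variables (a condensed restatement of values read back from the `ifalias` files in the log, not new output):

```shell
NVMF_TARGET_INTERFACE=target0
NVMF_TARGET_INTERFACE2=target1
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_SECOND_INITIATOR_IP=10.0.0.3
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_SECOND_TARGET_IP=10.0.0.4
```

These are the legacy names later test scripts consume; each value is recovered by `cat`-ing the corresponding device's `/sys/class/net/<dev>/ifalias`, inside `nvmf_ns_spdk` for the target devices.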
00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:07.793 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo initiator0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:07.793 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:19:08.052 09:12:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:08.052 09:12:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:08.311 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:08.311 Waiting for block devices as requested 00:19:08.311 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:08.570 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:08.570 No valid GPT data, bailing 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:08.570 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n2 
00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:08.571 No valid GPT data, bailing 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:08.571 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:08.829 No valid GPT data, bailing 
00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:08.829 No valid GPT data, bailing 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@395 -- # return 1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.1 -t tcp -s 4420 00:19:08.829 00:19:08.829 Discovery Log Number of Records 2, Generation counter 2 00:19:08.829 =====Discovery Log Entry 0====== 00:19:08.829 trtype: tcp 00:19:08.829 adrfam: ipv4 00:19:08.829 subtype: current discovery subsystem 00:19:08.829 treq: not specified, sq flow control disable supported 00:19:08.829 portid: 1 00:19:08.829 trsvcid: 4420 00:19:08.829 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:08.829 traddr: 10.0.0.1 00:19:08.829 eflags: none 00:19:08.829 sectype: none 00:19:08.829 =====Discovery Log Entry 1====== 00:19:08.829 trtype: tcp 00:19:08.829 adrfam: ipv4 00:19:08.829 subtype: nvme subsystem 00:19:08.829 treq: not specified, sq flow control disable supported 00:19:08.829 portid: 1 00:19:08.829 trsvcid: 4420 00:19:08.829 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:08.829 traddr: 10.0.0.1 00:19:08.829 eflags: none 00:19:08.829 sectype: none 00:19:08.829 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:19:08.829 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:09.088 ===================================================== 00:19:09.088 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:09.088 ===================================================== 00:19:09.088 Controller Capabilities/Features 00:19:09.088 ================================ 00:19:09.088 Vendor ID: 0000 00:19:09.088 Subsystem Vendor ID: 0000 00:19:09.088 Serial Number: 18d5c93dbe08ad7799da 00:19:09.088 Model Number: Linux 00:19:09.088 Firmware Version: 6.8.9-20 00:19:09.088 Recommended Arb Burst: 0 00:19:09.088 IEEE OUI Identifier: 00 00 00 00:19:09.088 Multi-path I/O 00:19:09.088 May have multiple subsystem ports: No 00:19:09.088 May 
have multiple controllers: No 00:19:09.088 Associated with SR-IOV VF: No 00:19:09.088 Max Data Transfer Size: Unlimited 00:19:09.088 Max Number of Namespaces: 0 00:19:09.088 Max Number of I/O Queues: 1024 00:19:09.088 NVMe Specification Version (VS): 1.3 00:19:09.088 NVMe Specification Version (Identify): 1.3 00:19:09.088 Maximum Queue Entries: 1024 00:19:09.088 Contiguous Queues Required: No 00:19:09.088 Arbitration Mechanisms Supported 00:19:09.088 Weighted Round Robin: Not Supported 00:19:09.088 Vendor Specific: Not Supported 00:19:09.088 Reset Timeout: 7500 ms 00:19:09.088 Doorbell Stride: 4 bytes 00:19:09.088 NVM Subsystem Reset: Not Supported 00:19:09.088 Command Sets Supported 00:19:09.088 NVM Command Set: Supported 00:19:09.088 Boot Partition: Not Supported 00:19:09.088 Memory Page Size Minimum: 4096 bytes 00:19:09.088 Memory Page Size Maximum: 4096 bytes 00:19:09.088 Persistent Memory Region: Not Supported 00:19:09.088 Optional Asynchronous Events Supported 00:19:09.088 Namespace Attribute Notices: Not Supported 00:19:09.088 Firmware Activation Notices: Not Supported 00:19:09.088 ANA Change Notices: Not Supported 00:19:09.088 PLE Aggregate Log Change Notices: Not Supported 00:19:09.088 LBA Status Info Alert Notices: Not Supported 00:19:09.088 EGE Aggregate Log Change Notices: Not Supported 00:19:09.088 Normal NVM Subsystem Shutdown event: Not Supported 00:19:09.088 Zone Descriptor Change Notices: Not Supported 00:19:09.088 Discovery Log Change Notices: Supported 00:19:09.088 Controller Attributes 00:19:09.088 128-bit Host Identifier: Not Supported 00:19:09.088 Non-Operational Permissive Mode: Not Supported 00:19:09.088 NVM Sets: Not Supported 00:19:09.088 Read Recovery Levels: Not Supported 00:19:09.088 Endurance Groups: Not Supported 00:19:09.088 Predictable Latency Mode: Not Supported 00:19:09.088 Traffic Based Keep ALive: Not Supported 00:19:09.088 Namespace Granularity: Not Supported 00:19:09.088 SQ Associations: Not Supported 00:19:09.088 UUID List: 
Not Supported 00:19:09.088 Multi-Domain Subsystem: Not Supported 00:19:09.088 Fixed Capacity Management: Not Supported 00:19:09.088 Variable Capacity Management: Not Supported 00:19:09.088 Delete Endurance Group: Not Supported 00:19:09.088 Delete NVM Set: Not Supported 00:19:09.088 Extended LBA Formats Supported: Not Supported 00:19:09.089 Flexible Data Placement Supported: Not Supported 00:19:09.089 00:19:09.089 Controller Memory Buffer Support 00:19:09.089 ================================ 00:19:09.089 Supported: No 00:19:09.089 00:19:09.089 Persistent Memory Region Support 00:19:09.089 ================================ 00:19:09.089 Supported: No 00:19:09.089 00:19:09.089 Admin Command Set Attributes 00:19:09.089 ============================ 00:19:09.089 Security Send/Receive: Not Supported 00:19:09.089 Format NVM: Not Supported 00:19:09.089 Firmware Activate/Download: Not Supported 00:19:09.089 Namespace Management: Not Supported 00:19:09.089 Device Self-Test: Not Supported 00:19:09.089 Directives: Not Supported 00:19:09.089 NVMe-MI: Not Supported 00:19:09.089 Virtualization Management: Not Supported 00:19:09.089 Doorbell Buffer Config: Not Supported 00:19:09.089 Get LBA Status Capability: Not Supported 00:19:09.089 Command & Feature Lockdown Capability: Not Supported 00:19:09.089 Abort Command Limit: 1 00:19:09.089 Async Event Request Limit: 1 00:19:09.089 Number of Firmware Slots: N/A 00:19:09.089 Firmware Slot 1 Read-Only: N/A 00:19:09.089 Firmware Activation Without Reset: N/A 00:19:09.089 Multiple Update Detection Support: N/A 00:19:09.089 Firmware Update Granularity: No Information Provided 00:19:09.089 Per-Namespace SMART Log: No 00:19:09.089 Asymmetric Namespace Access Log Page: Not Supported 00:19:09.089 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:09.089 Command Effects Log Page: Not Supported 00:19:09.089 Get Log Page Extended Data: Supported 00:19:09.089 Telemetry Log Pages: Not Supported 00:19:09.089 Persistent Event Log Pages: Not 
Supported 00:19:09.089 Supported Log Pages Log Page: May Support 00:19:09.089 Commands Supported & Effects Log Page: Not Supported 00:19:09.089 Feature Identifiers & Effects Log Page:May Support 00:19:09.089 NVMe-MI Commands & Effects Log Page: May Support 00:19:09.089 Data Area 4 for Telemetry Log: Not Supported 00:19:09.089 Error Log Page Entries Supported: 1 00:19:09.089 Keep Alive: Not Supported 00:19:09.089 00:19:09.089 NVM Command Set Attributes 00:19:09.089 ========================== 00:19:09.089 Submission Queue Entry Size 00:19:09.089 Max: 1 00:19:09.089 Min: 1 00:19:09.089 Completion Queue Entry Size 00:19:09.089 Max: 1 00:19:09.089 Min: 1 00:19:09.089 Number of Namespaces: 0 00:19:09.089 Compare Command: Not Supported 00:19:09.089 Write Uncorrectable Command: Not Supported 00:19:09.089 Dataset Management Command: Not Supported 00:19:09.089 Write Zeroes Command: Not Supported 00:19:09.089 Set Features Save Field: Not Supported 00:19:09.089 Reservations: Not Supported 00:19:09.089 Timestamp: Not Supported 00:19:09.089 Copy: Not Supported 00:19:09.089 Volatile Write Cache: Not Present 00:19:09.089 Atomic Write Unit (Normal): 1 00:19:09.089 Atomic Write Unit (PFail): 1 00:19:09.089 Atomic Compare & Write Unit: 1 00:19:09.089 Fused Compare & Write: Not Supported 00:19:09.089 Scatter-Gather List 00:19:09.089 SGL Command Set: Supported 00:19:09.089 SGL Keyed: Not Supported 00:19:09.089 SGL Bit Bucket Descriptor: Not Supported 00:19:09.089 SGL Metadata Pointer: Not Supported 00:19:09.089 Oversized SGL: Not Supported 00:19:09.089 SGL Metadata Address: Not Supported 00:19:09.089 SGL Offset: Supported 00:19:09.089 Transport SGL Data Block: Not Supported 00:19:09.089 Replay Protected Memory Block: Not Supported 00:19:09.089 00:19:09.089 Firmware Slot Information 00:19:09.089 ========================= 00:19:09.089 Active slot: 0 00:19:09.089 00:19:09.089 00:19:09.089 Error Log 00:19:09.089 ========= 00:19:09.089 00:19:09.089 Active Namespaces 00:19:09.089 
================= 00:19:09.089 Discovery Log Page 00:19:09.089 ================== 00:19:09.089 Generation Counter: 2 00:19:09.089 Number of Records: 2 00:19:09.089 Record Format: 0 00:19:09.089 00:19:09.089 Discovery Log Entry 0 00:19:09.089 ---------------------- 00:19:09.089 Transport Type: 3 (TCP) 00:19:09.089 Address Family: 1 (IPv4) 00:19:09.089 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:09.089 Entry Flags: 00:19:09.089 Duplicate Returned Information: 0 00:19:09.089 Explicit Persistent Connection Support for Discovery: 0 00:19:09.089 Transport Requirements: 00:19:09.089 Secure Channel: Not Specified 00:19:09.089 Port ID: 1 (0x0001) 00:19:09.089 Controller ID: 65535 (0xffff) 00:19:09.089 Admin Max SQ Size: 32 00:19:09.089 Transport Service Identifier: 4420 00:19:09.089 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:09.089 Transport Address: 10.0.0.1 00:19:09.089 Discovery Log Entry 1 00:19:09.089 ---------------------- 00:19:09.089 Transport Type: 3 (TCP) 00:19:09.089 Address Family: 1 (IPv4) 00:19:09.089 Subsystem Type: 2 (NVM Subsystem) 00:19:09.089 Entry Flags: 00:19:09.089 Duplicate Returned Information: 0 00:19:09.089 Explicit Persistent Connection Support for Discovery: 0 00:19:09.089 Transport Requirements: 00:19:09.089 Secure Channel: Not Specified 00:19:09.089 Port ID: 1 (0x0001) 00:19:09.089 Controller ID: 65535 (0xffff) 00:19:09.089 Admin Max SQ Size: 32 00:19:09.089 Transport Service Identifier: 4420 00:19:09.089 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:09.089 Transport Address: 10.0.0.1 00:19:09.089 09:12:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:09.349 get_feature(0x01) failed 00:19:09.349 get_feature(0x02) failed 00:19:09.349 get_feature(0x04) failed 00:19:09.349 
===================================================== 00:19:09.349 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:09.349 ===================================================== 00:19:09.349 Controller Capabilities/Features 00:19:09.349 ================================ 00:19:09.349 Vendor ID: 0000 00:19:09.349 Subsystem Vendor ID: 0000 00:19:09.349 Serial Number: e2c97e55af98ee67be67 00:19:09.349 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:09.349 Firmware Version: 6.8.9-20 00:19:09.349 Recommended Arb Burst: 6 00:19:09.349 IEEE OUI Identifier: 00 00 00 00:19:09.349 Multi-path I/O 00:19:09.349 May have multiple subsystem ports: Yes 00:19:09.349 May have multiple controllers: Yes 00:19:09.349 Associated with SR-IOV VF: No 00:19:09.349 Max Data Transfer Size: Unlimited 00:19:09.350 Max Number of Namespaces: 1024 00:19:09.350 Max Number of I/O Queues: 128 00:19:09.350 NVMe Specification Version (VS): 1.3 00:19:09.350 NVMe Specification Version (Identify): 1.3 00:19:09.350 Maximum Queue Entries: 1024 00:19:09.350 Contiguous Queues Required: No 00:19:09.350 Arbitration Mechanisms Supported 00:19:09.350 Weighted Round Robin: Not Supported 00:19:09.350 Vendor Specific: Not Supported 00:19:09.350 Reset Timeout: 7500 ms 00:19:09.350 Doorbell Stride: 4 bytes 00:19:09.350 NVM Subsystem Reset: Not Supported 00:19:09.350 Command Sets Supported 00:19:09.350 NVM Command Set: Supported 00:19:09.350 Boot Partition: Not Supported 00:19:09.350 Memory Page Size Minimum: 4096 bytes 00:19:09.350 Memory Page Size Maximum: 4096 bytes 00:19:09.350 Persistent Memory Region: Not Supported 00:19:09.350 Optional Asynchronous Events Supported 00:19:09.350 Namespace Attribute Notices: Supported 00:19:09.350 Firmware Activation Notices: Not Supported 00:19:09.350 ANA Change Notices: Supported 00:19:09.350 PLE Aggregate Log Change Notices: Not Supported 00:19:09.350 LBA Status Info Alert Notices: Not Supported 00:19:09.350 EGE Aggregate Log Change Notices: 
Not Supported 00:19:09.350 Normal NVM Subsystem Shutdown event: Not Supported 00:19:09.350 Zone Descriptor Change Notices: Not Supported 00:19:09.350 Discovery Log Change Notices: Not Supported 00:19:09.350 Controller Attributes 00:19:09.350 128-bit Host Identifier: Supported 00:19:09.350 Non-Operational Permissive Mode: Not Supported 00:19:09.350 NVM Sets: Not Supported 00:19:09.350 Read Recovery Levels: Not Supported 00:19:09.350 Endurance Groups: Not Supported 00:19:09.350 Predictable Latency Mode: Not Supported 00:19:09.350 Traffic Based Keep ALive: Supported 00:19:09.350 Namespace Granularity: Not Supported 00:19:09.350 SQ Associations: Not Supported 00:19:09.350 UUID List: Not Supported 00:19:09.350 Multi-Domain Subsystem: Not Supported 00:19:09.350 Fixed Capacity Management: Not Supported 00:19:09.350 Variable Capacity Management: Not Supported 00:19:09.350 Delete Endurance Group: Not Supported 00:19:09.350 Delete NVM Set: Not Supported 00:19:09.350 Extended LBA Formats Supported: Not Supported 00:19:09.350 Flexible Data Placement Supported: Not Supported 00:19:09.350 00:19:09.350 Controller Memory Buffer Support 00:19:09.350 ================================ 00:19:09.350 Supported: No 00:19:09.350 00:19:09.350 Persistent Memory Region Support 00:19:09.350 ================================ 00:19:09.350 Supported: No 00:19:09.350 00:19:09.350 Admin Command Set Attributes 00:19:09.350 ============================ 00:19:09.350 Security Send/Receive: Not Supported 00:19:09.350 Format NVM: Not Supported 00:19:09.350 Firmware Activate/Download: Not Supported 00:19:09.350 Namespace Management: Not Supported 00:19:09.350 Device Self-Test: Not Supported 00:19:09.350 Directives: Not Supported 00:19:09.350 NVMe-MI: Not Supported 00:19:09.350 Virtualization Management: Not Supported 00:19:09.350 Doorbell Buffer Config: Not Supported 00:19:09.350 Get LBA Status Capability: Not Supported 00:19:09.350 Command & Feature Lockdown Capability: Not Supported 00:19:09.350 Abort 
Command Limit: 4 00:19:09.350 Async Event Request Limit: 4 00:19:09.350 Number of Firmware Slots: N/A 00:19:09.350 Firmware Slot 1 Read-Only: N/A 00:19:09.350 Firmware Activation Without Reset: N/A 00:19:09.350 Multiple Update Detection Support: N/A 00:19:09.350 Firmware Update Granularity: No Information Provided 00:19:09.350 Per-Namespace SMART Log: Yes 00:19:09.350 Asymmetric Namespace Access Log Page: Supported 00:19:09.350 ANA Transition Time : 10 sec 00:19:09.350 00:19:09.350 Asymmetric Namespace Access Capabilities 00:19:09.350 ANA Optimized State : Supported 00:19:09.350 ANA Non-Optimized State : Supported 00:19:09.350 ANA Inaccessible State : Supported 00:19:09.350 ANA Persistent Loss State : Supported 00:19:09.350 ANA Change State : Supported 00:19:09.350 ANAGRPID is not changed : No 00:19:09.350 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:09.350 00:19:09.350 ANA Group Identifier Maximum : 128 00:19:09.350 Number of ANA Group Identifiers : 128 00:19:09.350 Max Number of Allowed Namespaces : 1024 00:19:09.350 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:09.350 Command Effects Log Page: Supported 00:19:09.350 Get Log Page Extended Data: Supported 00:19:09.350 Telemetry Log Pages: Not Supported 00:19:09.350 Persistent Event Log Pages: Not Supported 00:19:09.350 Supported Log Pages Log Page: May Support 00:19:09.350 Commands Supported & Effects Log Page: Not Supported 00:19:09.350 Feature Identifiers & Effects Log Page:May Support 00:19:09.350 NVMe-MI Commands & Effects Log Page: May Support 00:19:09.350 Data Area 4 for Telemetry Log: Not Supported 00:19:09.350 Error Log Page Entries Supported: 128 00:19:09.350 Keep Alive: Supported 00:19:09.350 Keep Alive Granularity: 1000 ms 00:19:09.350 00:19:09.350 NVM Command Set Attributes 00:19:09.350 ========================== 00:19:09.350 Submission Queue Entry Size 00:19:09.350 Max: 64 00:19:09.350 Min: 64 00:19:09.350 Completion Queue Entry Size 00:19:09.350 Max: 16 00:19:09.350 Min: 16 00:19:09.350 
Number of Namespaces: 1024 00:19:09.350 Compare Command: Not Supported 00:19:09.350 Write Uncorrectable Command: Not Supported 00:19:09.350 Dataset Management Command: Supported 00:19:09.350 Write Zeroes Command: Supported 00:19:09.350 Set Features Save Field: Not Supported 00:19:09.350 Reservations: Not Supported 00:19:09.350 Timestamp: Not Supported 00:19:09.350 Copy: Not Supported 00:19:09.350 Volatile Write Cache: Present 00:19:09.350 Atomic Write Unit (Normal): 1 00:19:09.350 Atomic Write Unit (PFail): 1 00:19:09.350 Atomic Compare & Write Unit: 1 00:19:09.350 Fused Compare & Write: Not Supported 00:19:09.350 Scatter-Gather List 00:19:09.350 SGL Command Set: Supported 00:19:09.350 SGL Keyed: Not Supported 00:19:09.350 SGL Bit Bucket Descriptor: Not Supported 00:19:09.350 SGL Metadata Pointer: Not Supported 00:19:09.350 Oversized SGL: Not Supported 00:19:09.350 SGL Metadata Address: Not Supported 00:19:09.350 SGL Offset: Supported 00:19:09.350 Transport SGL Data Block: Not Supported 00:19:09.350 Replay Protected Memory Block: Not Supported 00:19:09.350 00:19:09.350 Firmware Slot Information 00:19:09.350 ========================= 00:19:09.350 Active slot: 0 00:19:09.350 00:19:09.350 Asymmetric Namespace Access 00:19:09.350 =========================== 00:19:09.350 Change Count : 0 00:19:09.350 Number of ANA Group Descriptors : 1 00:19:09.350 ANA Group Descriptor : 0 00:19:09.350 ANA Group ID : 1 00:19:09.350 Number of NSID Values : 1 00:19:09.350 Change Count : 0 00:19:09.350 ANA State : 1 00:19:09.350 Namespace Identifier : 1 00:19:09.350 00:19:09.350 Commands Supported and Effects 00:19:09.350 ============================== 00:19:09.350 Admin Commands 00:19:09.350 -------------- 00:19:09.350 Get Log Page (02h): Supported 00:19:09.350 Identify (06h): Supported 00:19:09.350 Abort (08h): Supported 00:19:09.350 Set Features (09h): Supported 00:19:09.350 Get Features (0Ah): Supported 00:19:09.350 Asynchronous Event Request (0Ch): Supported 00:19:09.350 Keep Alive 
(18h): Supported 00:19:09.350 I/O Commands 00:19:09.350 ------------ 00:19:09.350 Flush (00h): Supported 00:19:09.350 Write (01h): Supported LBA-Change 00:19:09.350 Read (02h): Supported 00:19:09.350 Write Zeroes (08h): Supported LBA-Change 00:19:09.350 Dataset Management (09h): Supported 00:19:09.350 00:19:09.350 Error Log 00:19:09.350 ========= 00:19:09.350 Entry: 0 00:19:09.350 Error Count: 0x3 00:19:09.350 Submission Queue Id: 0x0 00:19:09.350 Command Id: 0x5 00:19:09.350 Phase Bit: 0 00:19:09.350 Status Code: 0x2 00:19:09.350 Status Code Type: 0x0 00:19:09.350 Do Not Retry: 1 00:19:09.350 Error Location: 0x28 00:19:09.350 LBA: 0x0 00:19:09.350 Namespace: 0x0 00:19:09.350 Vendor Log Page: 0x0 00:19:09.350 ----------- 00:19:09.350 Entry: 1 00:19:09.350 Error Count: 0x2 00:19:09.350 Submission Queue Id: 0x0 00:19:09.350 Command Id: 0x5 00:19:09.350 Phase Bit: 0 00:19:09.350 Status Code: 0x2 00:19:09.350 Status Code Type: 0x0 00:19:09.350 Do Not Retry: 1 00:19:09.350 Error Location: 0x28 00:19:09.350 LBA: 0x0 00:19:09.350 Namespace: 0x0 00:19:09.350 Vendor Log Page: 0x0 00:19:09.350 ----------- 00:19:09.350 Entry: 2 00:19:09.350 Error Count: 0x1 00:19:09.350 Submission Queue Id: 0x0 00:19:09.350 Command Id: 0x4 00:19:09.350 Phase Bit: 0 00:19:09.351 Status Code: 0x2 00:19:09.351 Status Code Type: 0x0 00:19:09.351 Do Not Retry: 1 00:19:09.351 Error Location: 0x28 00:19:09.351 LBA: 0x0 00:19:09.351 Namespace: 0x0 00:19:09.351 Vendor Log Page: 0x0 00:19:09.351 00:19:09.351 Number of Queues 00:19:09.351 ================ 00:19:09.351 Number of I/O Submission Queues: 128 00:19:09.351 Number of I/O Completion Queues: 128 00:19:09.351 00:19:09.351 ZNS Specific Controller Data 00:19:09.351 ============================ 00:19:09.351 Zone Append Size Limit: 0 00:19:09.351 00:19:09.351 00:19:09.351 Active Namespaces 00:19:09.351 ================= 00:19:09.351 get_feature(0x05) failed 00:19:09.351 Namespace ID:1 00:19:09.351 Command Set Identifier: NVM (00h) 00:19:09.351 
Deallocate: Supported 00:19:09.351 Deallocated/Unwritten Error: Not Supported 00:19:09.351 Deallocated Read Value: Unknown 00:19:09.351 Deallocate in Write Zeroes: Not Supported 00:19:09.351 Deallocated Guard Field: 0xFFFF 00:19:09.351 Flush: Supported 00:19:09.351 Reservation: Not Supported 00:19:09.351 Namespace Sharing Capabilities: Multiple Controllers 00:19:09.351 Size (in LBAs): 1310720 (5GiB) 00:19:09.351 Capacity (in LBAs): 1310720 (5GiB) 00:19:09.351 Utilization (in LBAs): 1310720 (5GiB) 00:19:09.351 UUID: c9b8b096-31d2-4b22-a62e-c387506da27c 00:19:09.351 Thin Provisioning: Not Supported 00:19:09.351 Per-NS Atomic Units: Yes 00:19:09.351 Atomic Boundary Size (Normal): 0 00:19:09.351 Atomic Boundary Size (PFail): 0 00:19:09.351 Atomic Boundary Offset: 0 00:19:09.351 NGUID/EUI64 Never Reused: No 00:19:09.351 ANA group ID: 1 00:19:09.351 Namespace Write Protected: No 00:19:09.351 Number of LBA Formats: 1 00:19:09.351 Current LBA Format: LBA Format #00 00:19:09.351 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:09.351 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:09.351 rmmod nvme_tcp 00:19:09.351 rmmod nvme_fabrics 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete 
nvmf_br 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:09.351 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:09.610 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:09.611 09:12:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # continue 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f 
/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:19:09.611 09:12:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:10.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:10.438 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:10.438 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:10.438 ************************************ 00:19:10.438 END TEST nvmf_identify_kernel_target 00:19:10.438 ************************************ 00:19:10.438 00:19:10.438 real 0m3.277s 00:19:10.438 user 0m1.205s 00:19:10.438 sys 0m1.511s 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:10.438 ************************************ 00:19:10.438 START TEST nvmf_auth_host 00:19:10.438 ************************************ 00:19:10.438 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:10.699 * Looking for test storage... 00:19:10.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 
00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.699 
09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.699 --rc genhtml_branch_coverage=1 00:19:10.699 --rc genhtml_function_coverage=1 00:19:10.699 --rc genhtml_legend=1 00:19:10.699 --rc geninfo_all_blocks=1 00:19:10.699 --rc geninfo_unexecuted_blocks=1 00:19:10.699 00:19:10.699 ' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.699 --rc genhtml_branch_coverage=1 00:19:10.699 --rc genhtml_function_coverage=1 00:19:10.699 --rc genhtml_legend=1 00:19:10.699 --rc geninfo_all_blocks=1 00:19:10.699 --rc geninfo_unexecuted_blocks=1 00:19:10.699 00:19:10.699 ' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.699 --rc genhtml_branch_coverage=1 00:19:10.699 --rc genhtml_function_coverage=1 00:19:10.699 --rc genhtml_legend=1 00:19:10.699 --rc geninfo_all_blocks=1 00:19:10.699 --rc geninfo_unexecuted_blocks=1 00:19:10.699 00:19:10.699 ' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.699 --rc genhtml_branch_coverage=1 00:19:10.699 --rc genhtml_function_coverage=1 00:19:10.699 --rc genhtml_legend=1 00:19:10.699 --rc geninfo_all_blocks=1 00:19:10.699 --rc geninfo_unexecuted_blocks=1 00:19:10.699 00:19:10.699 ' 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:19:10.699 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.700 
09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:10.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:10.700 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@223 -- # create_target_ns 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:10.700 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:10.700 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:10.700 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # 
val_to_ip 167772161 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:10.961 10.0.0.1 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
target0' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:10.961 10.0.0.2 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:10.961 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I 
INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@151 -- # set_up target1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip 
link set target1 up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772163 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:10.962 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:10.962 10.0.0.3 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772164 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:10.962 
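The `val_to_ip` calls traced above turn the integer pool values 167772163 and 167772164 into the dotted quads 10.0.0.3 and 10.0.0.4 before assigning them to initiator1 and target1. A minimal standalone sketch of that conversion (the function name and the `printf '%u.%u.%u.%u\n'` format come from the trace; the shift-and-mask octet extraction is an assumption about how `nvmf/setup.sh` derives the four operands):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into a dotted-quad IPv4 address, mirroring
# the val_to_ip helper invoked in the trace above.
val_to_ip() {
	local val=$1
	# Extract each octet by shifting and masking (assumed implementation;
	# the trace only shows the final printf with the octets already split).
	printf '%u.%u.%u.%u\n' \
		$((val >> 24 & 255)) $((val >> 16 & 255)) \
		$((val >> 8 & 255)) $((val & 255))
}

val_to_ip 167772163   # 10.0.0.3, as set on initiator1 above
val_to_ip 167772164   # 10.0.0.4, as set on target1 below
```

Incrementing the integer by one per interface (`ip_pool += 2` per pair) is what keeps the initiator/target addresses adjacent within the 10.0.0.0/24 subnet.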
09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:10.962 10.0.0.4 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 
-- # local dev=initiator1_br in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:10.962 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 
-- # dev_map["target$id"]=target1 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:10.963 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- 
# ip=10.0.0.1 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:11.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:19:11.223 00:19:11.223 --- 10.0.0.1 ping statistics --- 00:19:11.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.223 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:11.223 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:11.223 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:11.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:11.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:19:11.224 00:19:11.224 --- 10.0.0.2 ping statistics --- 00:19:11.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.224 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:11.224 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:11.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:11.224 00:19:11.224 --- 10.0.0.3 ping statistics --- 00:19:11.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.224 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 
00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:11.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:11.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:19:11.224 00:19:11.224 --- 10.0.0.4 ping statistics --- 00:19:11.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.224 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # return 0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:11.224 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 
00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target0 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:11.224 09:12:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:11.224 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:19:11.225 09:12:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo target1 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=target1 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:11.225 
09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=91199 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 91199 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91199 ']' 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.225 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- 
# digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c24d693a15a6efd32ef30198896db5ec 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.P5K 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c24d693a15a6efd32ef30198896db5ec 0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c24d693a15a6efd32ef30198896db5ec 0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c24d693a15a6efd32ef30198896db5ec 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.P5K 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.P5K 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.P5K 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=3445b62d056c2562bb5f78d2451df27fc17dcc30a9c5ded7d73a80ccf3af415b 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.wI8 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 3445b62d056c2562bb5f78d2451df27fc17dcc30a9c5ded7d73a80ccf3af415b 3 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 3445b62d056c2562bb5f78d2451df27fc17dcc30a9c5ded7d73a80ccf3af415b 3 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=3445b62d056c2562bb5f78d2451df27fc17dcc30a9c5ded7d73a80ccf3af415b 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:11.794 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.wI8 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.wI8 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wI8 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e123fa2fb24a44e5c2a36752a6bae2d438fbac2aa2db3545 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.RSS 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e123fa2fb24a44e5c2a36752a6bae2d438fbac2aa2db3545 0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e123fa2fb24a44e5c2a36752a6bae2d438fbac2aa2db3545 0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:11.794 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e123fa2fb24a44e5c2a36752a6bae2d438fbac2aa2db3545 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.RSS 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.RSS 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.RSS 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:11.794 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=2b7936d5d27c2de3fe6bc05f5d000a7f24d0fa539eb711f8 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.igT 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 2b7936d5d27c2de3fe6bc05f5d000a7f24d0fa539eb711f8 2 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # 
format_key DHHC-1 2b7936d5d27c2de3fe6bc05f5d000a7f24d0fa539eb711f8 2 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=2b7936d5d27c2de3fe6bc05f5d000a7f24d0fa539eb711f8 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.igT 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.igT 00:19:12.054 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.igT 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=8d7c6fed7ebf1c6b9dd46a0d3b6a40e1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.KTw 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 8d7c6fed7ebf1c6b9dd46a0d3b6a40e1 1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 8d7c6fed7ebf1c6b9dd46a0d3b6a40e1 1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=8d7c6fed7ebf1c6b9dd46a0d3b6a40e1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.KTw 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.KTw 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.KTw 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@529 -- # key=ceadc9640515704b532f0ecee7fab0b1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.UEM 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key ceadc9640515704b532f0ecee7fab0b1 1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 ceadc9640515704b532f0ecee7fab0b1 1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=ceadc9640515704b532f0ecee7fab0b1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.UEM 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.UEM 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UEM 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:19:12.055 09:12:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=4db0ff0c66ca2164357e6ff4b781f2a121a3518b03703c08 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.yei 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 4db0ff0c66ca2164357e6ff4b781f2a121a3518b03703c08 2 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 4db0ff0c66ca2164357e6ff4b781f2a121a3518b03703c08 2 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=4db0ff0c66ca2164357e6ff4b781f2a121a3518b03703c08 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:19:12.055 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.yei 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.yei 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.yei 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=60fbd7ef737c3021ea220e8da331e3ee 00:19:12.315 09:12:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.BEj 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 60fbd7ef737c3021ea220e8da331e3ee 0 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 60fbd7ef737c3021ea220e8da331e3ee 0 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=60fbd7ef737c3021ea220e8da331e3ee 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.BEj 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.BEj 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.BEj 00:19:12.315 09:12:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=dc7e9b96e047c8d6ba915885eabe3e1f25749c86542421cc0108e3e1979cc177 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.25w 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key dc7e9b96e047c8d6ba915885eabe3e1f25749c86542421cc0108e3e1979cc177 3 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 dc7e9b96e047c8d6ba915885eabe3e1f25749c86542421cc0108e3e1979cc177 3 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=dc7e9b96e047c8d6ba915885eabe3e1f25749c86542421cc0108e3e1979cc177 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 
00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.25w 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.25w 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.25w 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91199 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91199 ']' 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
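The repeated `gen_dhchap_key`/`format_dhchap_key` traces above (pick a digest id, `xxd -p` half as many random bytes as requested characters, then pipe through `python -`) boil down to the following procedure. This is a minimal sketch reconstructed from the logged commands, assuming the DH-HMAC-CHAP secret representation `DHHC-1:<dd>:<base64(secret || CRC-32)>:` and that the secret bytes are the ASCII hex characters themselves; neither detail is shown verbatim in the log.

```python
import base64
import os
import zlib

def format_dhchap_key(hexkey: str, digest_id: int) -> str:
    # DHHC-1:<dd>:<base64(secret || CRC-32(secret), little-endian)>:
    # Assumption: the secret bytes are the ASCII hex characters produced
    # by `xxd -p`, matching the lengths in the trace (len=32 -> 16 random
    # bytes -> 32 hex characters).
    secret = hexkey.encode("ascii")
    crc = zlib.crc32(secret).to_bytes(4, "little")
    return f"DHHC-1:{digest_id:02d}:{base64.b64encode(secret + crc).decode()}:"

def gen_dhchap_key(digest_id: int, nchars: int) -> str:
    # xxd -p -c0 -l <nchars/2> /dev/urandom  ->  nchars hex characters
    return format_dhchap_key(os.urandom(nchars // 2).hex(), digest_id)
```

In the trace the digest ids 0–3 map to null/sha256/sha384/sha512, and each formatted secret is written to a `mktemp -t spdk.key-<digest>.XXX` file with mode 0600 before being registered via `keyring_file_add_key`.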
00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.315 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.P5K 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wI8 ]] 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wI8 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.RSS 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.574 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.igT ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.igT 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.KTw 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UEM ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UEM 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.yei 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.BEj ]] 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.BEj 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.833 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.25w 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 
-- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:12.834 09:12:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:13.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.093 Waiting for block devices as requested 00:19:13.093 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.412 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 
00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:19:13.981 No valid GPT data, bailing 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:13.981 No valid GPT data, bailing 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 
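The block-device scan traced above ("No valid GPT data, bailing") walks every `/sys/block/nvme*` node, skips zoned namespaces, and treats a device whose `blkid -s PTTYPE -o value` probe prints nothing as free to claim. A small sketch of that selection logic, with the sysfs read and `blkid` probe injected as callables so it stays testable (those helper names are mine, not from the scripts):

```python
def pick_unused_blocks(devices, zoned_of, pttype_of):
    # zoned_of(dev)  stands in for reading /sys/block/<dev>/queue/zoned
    # pttype_of(dev) stands in for `blkid -s PTTYPE -o value /dev/<dev>`
    # An empty PTTYPE means no partition-table signature -> device free.
    # Note the shell loop keeps overwriting nvme=, so the *last* free
    # device wins (nvme=/dev/nvme1n1 in the log above).
    return [d for d in devices
            if zoned_of(d) == "none" and not pttype_of(d)]
```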
00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:13.981 No valid GPT data, bailing 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:19:13.981 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:19:14.240 No valid GPT data, bailing 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:19:14.240 09:12:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:14.240 09:12:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.1 -t tcp -s 4420 00:19:14.240 00:19:14.240 Discovery Log Number of Records 2, Generation counter 2 00:19:14.240 =====Discovery Log Entry 0====== 00:19:14.240 trtype: tcp 00:19:14.240 adrfam: ipv4 00:19:14.240 subtype: current discovery subsystem 00:19:14.240 treq: not specified, sq flow control disable supported 00:19:14.240 portid: 1 00:19:14.240 trsvcid: 4420 00:19:14.240 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:14.240 traddr: 10.0.0.1 00:19:14.240 eflags: none 00:19:14.240 sectype: none 00:19:14.240 =====Discovery Log Entry 1====== 00:19:14.240 trtype: tcp 00:19:14.240 adrfam: ipv4 00:19:14.240 subtype: nvme subsystem 00:19:14.240 treq: not specified, sq flow control disable supported 00:19:14.240 portid: 1 00:19:14.240 trsvcid: 4420 00:19:14.240 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:14.240 traddr: 10.0.0.1 00:19:14.240 eflags: none 00:19:14.240 sectype: none 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 
00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:14.240 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 
00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:14.241 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.500 nvme0n1 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.500 09:12:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:14.500 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:14.501 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.760 nvme0n1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.760 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:14.760 nvme0n1 00:19:14.761 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.761 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:14.761 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:14.761 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.761 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:15.020 
09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.020 nvme0n1 00:19:15.020 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.021 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.280 09:12:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.280 nvme0n1 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.280 
09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:15.280 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.281 09:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:15.281 
09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.281 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.539 nvme0n1 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:15.539 09:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:15.539 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # 
echo 10.0.0.1 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.798 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.056 nvme0n1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:16.056 09:12:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.056 09:12:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.315 nvme0n1 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.315 09:12:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.315 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.316 nvme0n1 00:19:16.316 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 
-- # keyid=3 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 nvme0n1 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.575 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 nvme0n1 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.835 09:12:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:16.835 09:12:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.772 
09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.772 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 
' cat /sys/class/net/initiator0/ifalias' 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.773 nvme0n1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:17.773 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.773 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.042 nvme0n1 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.042 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:18.042 09:12:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # 
echo initiator0 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.042 09:12:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.300 nvme0n1 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha256 ffdhe4096 3 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.300 09:12:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:18.300 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 nvme0n1 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.559 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:18.820 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 
-- # dev=initiator0 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.821 nvme0n1 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:18.821 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:19.080 09:12:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 
00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:20.984 09:12:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.984 nvme0n1 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.984 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:21.243 
09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:21.243 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.244 09:12:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.502 nvme0n1 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.503 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:21.503 09:13:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.503 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:21.762 nvme0n1 00:19:21.762 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.762 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:21.762 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:21.762 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.762 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.021 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 
00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.022 09:13:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.281 nvme0n1 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:22.281 09:13:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:22.281 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.540 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:22.799 nvme0n1
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:22.799 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v:
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=:
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v:
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=:
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:22.800 09:13:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:23.368 nvme0n1
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==:
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==:
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==:
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]]
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==:
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:23.368 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:23.627 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.195 nvme0n1 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t:
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW:
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t:
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW:
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.195 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.196 09:13:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.765 nvme0n1 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==:
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8:
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==:
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8:
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:24.765 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:24.766 09:13:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:25.339 nvme0n1 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.339 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:25.339 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:25.339 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.339 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:25.339 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=:
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=:
00:19:25.598 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:25.599 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.166 nvme0n1 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v:
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=:
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v:
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]]
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=:
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:19:26.166 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.167 09:13:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.426 nvme0n1 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==:
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==:
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==:
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]]
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==:
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384
--dhchap-dhgroups ffdhe2048 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.426 nvme0n1 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.426 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:26.686 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.687 nvme0n1 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.687 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.1 ]] 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.687 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.946 nvme0n1 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha384 ffdhe2048 4 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:26.946 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:26.947 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:26.947 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.947 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 nvme0n1 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:27.207 09:13:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.207 09:13:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 nvme0n1 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.207 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:27.467 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 nvme0n1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.467 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator0/ifalias' 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:27.467 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.468 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.468 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.726 nvme0n1 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.726 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # digest=sha384 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:27.727 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.727 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.986 nvme0n1 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.986 
09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:27.986 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:27.987 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.987 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 nvme0n1 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:28.246 09:13:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.246 09:13:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # 
echo initiator0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.246 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.504 nvme0n1 00:19:28.504 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.504 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.504 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.504 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.504 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 
00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:28.505 09:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.505 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.763 nvme0n1 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.763 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.022 nvme0n1 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.022 09:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:29.022 09:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:29.022 09:13:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:29.022 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:29.023 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:29.023 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.023 09:13:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 nvme0n1 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:29.281 09:13:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.281 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 nvme0n1 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.539 
09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.539 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.540 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.107 nvme0n1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- 
# echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:30.107 
09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.107 09:13:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.366 nvme0n1 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.366 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.624 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.624 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo 
ffdhe6144 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= 
ip 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.625 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.885 nvme0n1 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:30.885 09:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:30.885 09:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:30.885 09:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.885 09:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.144 nvme0n1 00:19:31.144 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.144 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.144 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.144 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.144 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha384)' 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.404 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.664 nvme0n1 00:19:31.664 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.664 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:31.664 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:31.664 09:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.664 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.664 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha384)' 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:31.665 09:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.665 09:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.232 nvme0n1 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:32.232 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:32.491 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.492 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.492 09:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.058 nvme0n1 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.058 09:13:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.058 09:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.624 nvme0n1 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:33.624 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:33.882 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:33.882 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:33.882 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:33.882 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:33.882 09:13:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.882 09:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.449 nvme0n1 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.449 
09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:34.449 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.450 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:19:35.016 nvme0n1 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.016 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.017 09:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.017 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.276 nvme0n1 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.276 09:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha512 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:35.276 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:35.277 09:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.277 nvme0n1 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.277 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:35.536 09:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.536 09:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.1 ]] 00:19:35.536 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 nvme0n1 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe2048 3 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.537 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.797 nvme0n1 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.797 
09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:35.797 09:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.797 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 nvme0n1 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator0/ifalias 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 nvme0n1 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.057 09:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 nvme0n1 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.317 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha512 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.577 09:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.577 nvme0n1 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.577 
09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:36.577 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.578 09:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.578 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.836 nvme0n1 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.836 09:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.836 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 
00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.837 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.157 nvme0n1 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.157 09:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.431 nvme0n1 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.431 09:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:37.431 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.432 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.690 nvme0n1 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.690 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:37.691 09:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:37.691 09:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.691 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.949 nvme0n1 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:37.949 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:37.950 09:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.950 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.208 nvme0n1 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:38.208 09:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:38.208 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:38.209 09:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.209 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.467 nvme0n1 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.467 
09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:38.467 09:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:38.467 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.468 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.046 nvme0n1 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.046 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.047 09:13:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.047 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:39.048 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:39.049 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.049 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.049 09:13:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.314 nvme0n1 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:39.314 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:39.315 09:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.315 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 nvme0n1 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:39.881 09:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.881 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:39.882 09:13:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.882 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.141 nvme0n1 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.141 09:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.141 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.400 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 nvme0n1 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.659 
09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI0ZDY5M2ExNWE2ZWZkMzJlZjMwMTk4ODk2ZGI1ZWOA/W7v: 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzQ0NWI2MmQwNTZjMjU2MmJiNWY3OGQyNDUxZGYyN2ZjMTdkY2MzMGE5YzVkZWQ3ZDczYTgwY2NmM2FmNDE1Yq3xsco=: 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.659 09:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:40.659 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.660 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:40.660 09:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.227 nvme0n1 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.227 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:41.228 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:41.486 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:41.487 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.487 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.487 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.055 nvme0n1 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.055 
09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 
-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.055 09:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.623 nvme0n1 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NGRiMGZmMGM2NmNhMjE2NDM1N2U2ZmY0Yjc4MWYyYTEyMWEzNTE4YjAzNzAzYzA4O06IDQ==: 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjBmYmQ3ZWY3MzdjMzAyMWVhMjIwZThkYTMzMWUzZWVveZp8: 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.623 09:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.191 nvme0n1 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.191 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:43.192 
09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGM3ZTliOTZlMDQ3YzhkNmJhOTE1ODg1ZWFiZTNlMWYyNTc0OWM4NjU0MjQyMWNjMDEwOGUzZTE5NzljYzE3N5N/Va0=: 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 09:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.451 
09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.451 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 nvme0n1 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:44.020 09:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.020 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.020 2024/11/20 09:13:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.020 request: 00:19:44.020 { 00:19:44.020 "method": "bdev_nvme_attach_controller", 00:19:44.020 "params": { 00:19:44.020 "name": "nvme0", 00:19:44.020 "trtype": "tcp", 00:19:44.020 "traddr": "10.0.0.1", 00:19:44.021 "adrfam": "ipv4", 00:19:44.021 "trsvcid": "4420", 00:19:44.021 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.021 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.021 "prchk_reftag": false, 00:19:44.021 "prchk_guard": false, 00:19:44.021 "hdgst": false, 00:19:44.021 "ddgst": false, 00:19:44.021 "allow_unrecognized_csi": false 00:19:44.021 } 00:19:44.021 } 00:19:44.021 Got JSON-RPC error response 00:19:44.021 GoRPCClient: error on JSON-RPC call 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.021 09:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:44.021 09:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.021 2024/11/20 09:13:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.021 request: 00:19:44.021 { 00:19:44.021 "method": "bdev_nvme_attach_controller", 00:19:44.021 "params": { 00:19:44.021 "name": "nvme0", 00:19:44.021 "trtype": "tcp", 00:19:44.021 "traddr": "10.0.0.1", 00:19:44.021 "adrfam": "ipv4", 00:19:44.021 "trsvcid": "4420", 00:19:44.021 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.021 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.021 "prchk_reftag": false, 00:19:44.021 "prchk_guard": false, 00:19:44.021 "hdgst": false, 00:19:44.021 "ddgst": false, 00:19:44.021 "dhchap_key": "key2", 00:19:44.021 "allow_unrecognized_csi": false 00:19:44.021 } 00:19:44.021 } 00:19:44.021 Got JSON-RPC error response 00:19:44.021 GoRPCClient: error on JSON-RPC call 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.021 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 
00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:44.281 09:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:44.281 09:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.281 2024/11/20 09:13:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 
dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:44.281 request: 00:19:44.281 { 00:19:44.281 "method": "bdev_nvme_attach_controller", 00:19:44.281 "params": { 00:19:44.281 "name": "nvme0", 00:19:44.281 "trtype": "tcp", 00:19:44.281 "traddr": "10.0.0.1", 00:19:44.281 "adrfam": "ipv4", 00:19:44.281 "trsvcid": "4420", 00:19:44.281 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:19:44.281 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:19:44.281 "prchk_reftag": false, 00:19:44.281 "prchk_guard": false, 00:19:44.281 "hdgst": false, 00:19:44.281 "ddgst": false, 00:19:44.281 "dhchap_key": "key1", 00:19:44.281 "dhchap_ctrlr_key": "ckey2", 00:19:44.281 "allow_unrecognized_csi": false 00:19:44.281 } 00:19:44.281 } 00:19:44.281 Got JSON-RPC error response 00:19:44.281 GoRPCClient: error on JSON-RPC call 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local 
dev=initiator0 in_ns= ip 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.281 nvme0n1 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # 
nvmet_auth_set_key sha256 ffdhe2048 2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.281 09:13:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.281 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.540 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.541 2024/11/20 09:13:23 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: 
Code=-13 Msg=Permission denied 00:19:44.541 request: 00:19:44.541 { 00:19:44.541 "method": "bdev_nvme_set_keys", 00:19:44.541 "params": { 00:19:44.541 "name": "nvme0", 00:19:44.541 "dhchap_key": "key1", 00:19:44.541 "dhchap_ctrlr_key": "ckey2" 00:19:44.541 } 00:19:44.541 } 00:19:44.541 Got JSON-RPC error response 00:19:44.541 GoRPCClient: error on JSON-RPC call 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:19:44.541 09:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 
-- # jq length 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTEyM2ZhMmZiMjRhNDRlNWMyYTM2NzUyYTZiYWUyZDQzOGZiYWMyYWEyZGIzNTQ1UM1g/Q==: 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MmI3OTM2ZDVkMjdjMmRlM2ZlNmJjMDVmNWQwMDBhN2YyNGQwZmE1MzllYjcxMWY4qqw/GA==: 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:19:45.478 09:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo initiator0 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.478 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.737 nvme0n1 00:19:45.737 
09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OGQ3YzZmZWQ3ZWJmMWM2YjlkZDQ2YTBkM2I2YTQwZTGeKf+t: 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: ]] 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VhZGM5NjQwNTE1NzA0YjUzMmYwZWNlZTdmYWIwYjHE4oSW: 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.737 
09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.737 2024/11/20 09:13:24 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:19:45.737 request: 00:19:45.737 { 00:19:45.737 "method": "bdev_nvme_set_keys", 00:19:45.737 "params": { 00:19:45.737 "name": "nvme0", 00:19:45.737 "dhchap_key": "key2", 00:19:45.737 "dhchap_ctrlr_key": "ckey1" 00:19:45.737 } 00:19:45.737 } 00:19:45.737 Got JSON-RPC error response 00:19:45.737 GoRPCClient: error on JSON-RPC call 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:19:45.737 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:19:45.738 09:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:19:46.674 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:19:46.674 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.674 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.674 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.674 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:46.933 rmmod nvme_tcp 00:19:46.933 rmmod nvme_fabrics 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 91199 ']' 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 91199 00:19:46.933 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 91199 ']' 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 91199 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91199 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.934 killing process with pid 91199 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91199' 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 91199 00:19:46.934 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 91199 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:47.192 
09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:47.192 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 
-- # [[ -n '' ]] 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:19:47.193 09:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # continue 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@41 -- # dev_map=() 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:19:47.193 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:19:47.452 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:48.055 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.055 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.055 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:48.321 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.P5K /tmp/spdk.key-null.RSS /tmp/spdk.key-sha256.KTw /tmp/spdk.key-sha384.yei /tmp/spdk.key-sha512.25w /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:19:48.321 09:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:48.581 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:48.581 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.581 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:48.581 00:19:48.581 real 0m38.064s 00:19:48.581 user 0m34.928s 00:19:48.581 sys 0m4.502s 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.581 ************************************ 00:19:48.581 END TEST nvmf_auth_host 00:19:48.581 ************************************ 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # [[ tcp == \t\c\p ]] 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.581 
************************************ 00:19:48.581 START TEST nvmf_digest 00:19:48.581 ************************************ 00:19:48.581 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:19:48.841 * Looking for test storage... 00:19:48.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" 
in 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.841 --rc genhtml_branch_coverage=1 00:19:48.841 --rc genhtml_function_coverage=1 
00:19:48.841 --rc genhtml_legend=1 00:19:48.841 --rc geninfo_all_blocks=1 00:19:48.841 --rc geninfo_unexecuted_blocks=1 00:19:48.841 00:19:48.841 ' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.841 --rc genhtml_branch_coverage=1 00:19:48.841 --rc genhtml_function_coverage=1 00:19:48.841 --rc genhtml_legend=1 00:19:48.841 --rc geninfo_all_blocks=1 00:19:48.841 --rc geninfo_unexecuted_blocks=1 00:19:48.841 00:19:48.841 ' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.841 --rc genhtml_branch_coverage=1 00:19:48.841 --rc genhtml_function_coverage=1 00:19:48.841 --rc genhtml_legend=1 00:19:48.841 --rc geninfo_all_blocks=1 00:19:48.841 --rc geninfo_unexecuted_blocks=1 00:19:48.841 00:19:48.841 ' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.841 --rc genhtml_branch_coverage=1 00:19:48.841 --rc genhtml_function_coverage=1 00:19:48.841 --rc genhtml_legend=1 00:19:48.841 --rc geninfo_all_blocks=1 00:19:48.841 --rc geninfo_unexecuted_blocks=1 00:19:48.841 00:19:48.841 ' 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:19:48.841 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:48.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:48.842 09:13:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ 
virt == phy-fallback ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@280 -- # nvmf_veth_init 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@223 -- # create_target_ns 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # create_main_bridge 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@105 -- # delete_main_bridge 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:19:48.842 
09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 
type=veth ip=167772161 transport=tcp ips 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:48.842 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator0 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:48.843 09:13:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target0 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0 up 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target0_br 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target0 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:19:48.843 09:13:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:19:48.843 10.0.0.1 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:19:48.843 
09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:19:48.843 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:19:49.103 10.0.0.2 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator0 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 
00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target0_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@207 -- # ip link set target0_br up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # create_veth initiator1 
initiator1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up initiator1 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@151 -- # set_up target1 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:19:49.103 09:13:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1 up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # set_up target1_br 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns target1 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772163 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:19:49.103 10.0.0.3 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772164 00:19:49.103 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:19:49.104 
09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:19:49.104 10.0.0.4 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up initiator1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # 
local dev=initiator1_br in_ns= 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@129 -- # set_up target1_br 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:19:49.104 09:13:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 2 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 
00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:49.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:19:49.104 00:19:49.104 --- 10.0.0.1 ping statistics --- 00:19:49.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.104 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n 
target0 ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:49.104 09:13:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:49.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:19:49.104 00:19:49.104 --- 10.0.0.2 ping statistics --- 00:19:49.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.104 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.104 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:49.105 
09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:19:49.105 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:19:49.364 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:19:49.365 00:19:49.365 --- 10.0.0.3 ping statistics --- 00:19:49.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.365 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 
]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:19:49.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:49.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:19:49.365 00:19:49.365 --- 10.0.0.4 ping statistics --- 00:19:49.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.365 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # return 0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator0 
00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=initiator1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:19:49.365 
09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target0 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=target1 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.4 
00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:49.365 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 ************************************ 00:19:49.366 START TEST nvmf_digest_clean 00:19:49.366 ************************************ 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=93109 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 93109 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93109 ']' 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.366 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.366 [2024-11-20 09:13:28.208988] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:49.366 [2024-11-20 09:13:28.209105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.625 [2024-11-20 09:13:28.358947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.625 [2024-11-20 09:13:28.416781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.625 [2024-11-20 09:13:28.416851] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.625 [2024-11-20 09:13:28.416876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.625 [2024-11-20 09:13:28.416884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.625 [2024-11-20 09:13:28.416891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.625 [2024-11-20 09:13:28.417292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.625 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.884 null0 00:19:49.884 [2024-11-20 09:13:28.643221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.884 [2024-11-20 09:13:28.667371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93146 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93146 /var/tmp/bperf.sock 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93146 ']' 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:49.884 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:49.885 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:19:49.885 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.885 09:13:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:49.885 [2024-11-20 09:13:28.738842] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:49.885 [2024-11-20 09:13:28.738948] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93146 ] 00:19:50.143 [2024-11-20 09:13:28.890884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.143 [2024-11-20 09:13:28.954984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.143 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.143 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:50.143 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:50.143 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:50.143 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:50.712 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:50.712 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:50.971 nvme0n1 00:19:50.971 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:50.971 09:13:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:19:50.971 Running I/O for 2 seconds... 00:19:53.285 18525.00 IOPS, 72.36 MiB/s [2024-11-20T09:13:32.204Z] 18594.50 IOPS, 72.63 MiB/s 00:19:53.285 Latency(us) 00:19:53.285 [2024-11-20T09:13:32.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.285 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:19:53.285 nvme0n1 : 2.00 18612.68 72.71 0.00 0.00 6871.03 3842.79 17277.67 00:19:53.285 [2024-11-20T09:13:32.204Z] =================================================================================================================== 00:19:53.286 [2024-11-20T09:13:32.205Z] Total : 18612.68 72.71 0.00 0.00 6871.03 3842.79 17277.67 00:19:53.286 { 00:19:53.286 "results": [ 00:19:53.286 { 00:19:53.286 "job": "nvme0n1", 00:19:53.286 "core_mask": "0x2", 00:19:53.286 "workload": "randread", 00:19:53.286 "status": "finished", 00:19:53.286 "queue_depth": 128, 00:19:53.286 "io_size": 4096, 00:19:53.286 "runtime": 2.004923, 00:19:53.286 "iops": 18612.68487617729, 00:19:53.286 "mibps": 72.70580029756754, 00:19:53.286 "io_failed": 0, 00:19:53.286 "io_timeout": 0, 00:19:53.286 "avg_latency_us": 6871.026204873723, 00:19:53.286 "min_latency_us": 3842.7927272727275, 00:19:53.286 "max_latency_us": 17277.672727272726 00:19:53.286 } 00:19:53.286 ], 00:19:53.286 "core_count": 1 00:19:53.286 } 00:19:53.286 09:13:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:53.286 09:13:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:53.286 09:13:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:53.286 09:13:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:53.286 | select(.opcode=="crc32c") 00:19:53.286 | "\(.module_name) \(.executed)"' 00:19:53.286 09:13:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93146 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93146 ']' 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93146 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.286 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93146 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:53.545 killing process with pid 93146 00:19:53.545 Received shutdown signal, test time was about 2.000000 seconds 00:19:53.545 00:19:53.545 Latency(us) 
00:19:53.545 [2024-11-20T09:13:32.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.545 [2024-11-20T09:13:32.464Z] =================================================================================================================== 00:19:53.545 [2024-11-20T09:13:32.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93146' 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93146 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93146 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93221 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93221 /var/tmp/bperf.sock 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93221 ']' 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.545 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:53.804 [2024-11-20 09:13:32.478983] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:53.804 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:53.804 Zero copy mechanism will not be used. 
00:19:53.804 [2024-11-20 09:13:32.479087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93221 ] 00:19:53.804 [2024-11-20 09:13:32.621051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.804 [2024-11-20 09:13:32.681783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.804 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:53.804 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:53.804 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:53.804 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:53.804 09:13:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:54.373 09:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.373 09:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:54.631 nvme0n1 00:19:54.631 09:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:54.631 09:13:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:19:54.890 I/O size of 131072 is greater than zero copy threshold (65536). 00:19:54.890 Zero copy mechanism will not be used. 00:19:54.890 Running I/O for 2 seconds... 00:19:56.764 7715.00 IOPS, 964.38 MiB/s [2024-11-20T09:13:35.683Z] 7766.00 IOPS, 970.75 MiB/s 00:19:56.764 Latency(us) 00:19:56.764 [2024-11-20T09:13:35.683Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.764 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:19:56.764 nvme0n1 : 2.00 7761.56 970.20 0.00 0.00 2057.58 603.23 7238.75 00:19:56.764 [2024-11-20T09:13:35.683Z] =================================================================================================================== 00:19:56.764 [2024-11-20T09:13:35.683Z] Total : 7761.56 970.20 0.00 0.00 2057.58 603.23 7238.75 00:19:56.764 { 00:19:56.764 "results": [ 00:19:56.764 { 00:19:56.764 "job": "nvme0n1", 00:19:56.764 "core_mask": "0x2", 00:19:56.764 "workload": "randread", 00:19:56.764 "status": "finished", 00:19:56.764 "queue_depth": 16, 00:19:56.764 "io_size": 131072, 00:19:56.764 "runtime": 2.003205, 00:19:56.764 "iops": 7761.562096739975, 00:19:56.764 "mibps": 970.1952620924968, 00:19:56.764 "io_failed": 0, 00:19:56.764 "io_timeout": 0, 00:19:56.764 "avg_latency_us": 2057.5842683069436, 00:19:56.764 "min_latency_us": 603.2290909090909, 00:19:56.764 "max_latency_us": 7238.749090909091 00:19:56.764 } 00:19:56.764 ], 00:19:56.764 "core_count": 1 00:19:56.764 } 00:19:56.764 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:19:56.764 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:19:56.764 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:19:56.764 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:19:56.764 | 
select(.opcode=="crc32c") 00:19:56.764 | "\(.module_name) \(.executed)"' 00:19:56.764 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93221 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93221 ']' 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93221 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93221 00:19:57.332 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:57.332 killing process with pid 93221 00:19:57.332 Received shutdown signal, test time was about 2.000000 seconds 00:19:57.332 00:19:57.332 Latency(us) 00:19:57.332 [2024-11-20T09:13:36.251Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.333 [2024-11-20T09:13:36.252Z] 
=================================================================================================================== 00:19:57.333 [2024-11-20T09:13:36.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.333 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:57.333 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93221' 00:19:57.333 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93221 00:19:57.333 09:13:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93221 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93298 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93298 /var/tmp/bperf.sock 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 
128 -z --wait-for-rpc 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93298 ']' 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:19:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.333 09:13:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:19:57.333 [2024-11-20 09:13:36.242590] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:19:57.333 [2024-11-20 09:13:36.242718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93298 ] 00:19:57.614 [2024-11-20 09:13:36.384538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.614 [2024-11-20 09:13:36.434469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.550 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.550 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:19:58.550 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:19:58.550 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:19:58.551 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:19:58.809 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:58.809 09:13:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:19:59.377 nvme0n1 00:19:59.377 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:19:59.377 09:13:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock 
perform_tests 00:19:59.377 Running I/O for 2 seconds... 00:20:01.249 23135.00 IOPS, 90.37 MiB/s [2024-11-20T09:13:40.168Z] 23342.50 IOPS, 91.18 MiB/s 00:20:01.249 Latency(us) 00:20:01.249 [2024-11-20T09:13:40.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:01.249 nvme0n1 : 2.01 23362.06 91.26 0.00 0.00 5471.67 2710.81 12153.95 00:20:01.249 [2024-11-20T09:13:40.168Z] =================================================================================================================== 00:20:01.249 [2024-11-20T09:13:40.168Z] Total : 23362.06 91.26 0.00 0.00 5471.67 2710.81 12153.95 00:20:01.249 { 00:20:01.249 "results": [ 00:20:01.249 { 00:20:01.249 "job": "nvme0n1", 00:20:01.249 "core_mask": "0x2", 00:20:01.249 "workload": "randwrite", 00:20:01.249 "status": "finished", 00:20:01.249 "queue_depth": 128, 00:20:01.249 "io_size": 4096, 00:20:01.249 "runtime": 2.007785, 00:20:01.249 "iops": 23362.063169114223, 00:20:01.249 "mibps": 91.25805925435243, 00:20:01.249 "io_failed": 0, 00:20:01.249 "io_timeout": 0, 00:20:01.249 "avg_latency_us": 5471.6701698949155, 00:20:01.249 "min_latency_us": 2710.807272727273, 00:20:01.249 "max_latency_us": 12153.949090909091 00:20:01.249 } 00:20:01.249 ], 00:20:01.249 "core_count": 1 00:20:01.249 } 00:20:01.249 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:01.249 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:01.249 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:01.249 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:01.249 | select(.opcode=="crc32c") 00:20:01.249 | "\(.module_name) \(.executed)"' 00:20:01.249 09:13:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93298 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93298 ']' 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93298 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93298 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:01.816 killing process with pid 93298 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93298' 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93298 00:20:01.816 Received shutdown signal, 
test time was about 2.000000 seconds 00:20:01.816 00:20:01.816 Latency(us) 00:20:01.816 [2024-11-20T09:13:40.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.816 [2024-11-20T09:13:40.735Z] =================================================================================================================== 00:20:01.816 [2024-11-20T09:13:40.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93298 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93385 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93385 /var/tmp/bperf.sock 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93385 ']' 00:20:01.816 09:13:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:01.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:01.816 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:01.816 [2024-11-20 09:13:40.729304] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:01.816 [2024-11-20 09:13:40.729425] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93385 ] 00:20:01.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:01.816 Zero copy mechanism will not be used. 
00:20:02.076 [2024-11-20 09:13:40.878742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.076 [2024-11-20 09:13:40.938191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.076 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:02.076 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:02.076 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:02.076 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:02.076 09:13:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:02.643 09:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:02.643 09:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:02.903 nvme0n1 00:20:02.903 09:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:02.903 09:13:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:02.903 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:02.903 Zero copy mechanism will not be used. 00:20:02.903 Running I/O for 2 seconds... 
00:20:05.216 6397.00 IOPS, 799.62 MiB/s [2024-11-20T09:13:44.135Z] 6502.00 IOPS, 812.75 MiB/s 00:20:05.216 Latency(us) 00:20:05.216 [2024-11-20T09:13:44.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.216 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:05.216 nvme0n1 : 2.00 6499.44 812.43 0.00 0.00 2456.07 2010.76 8460.10 00:20:05.216 [2024-11-20T09:13:44.135Z] =================================================================================================================== 00:20:05.216 [2024-11-20T09:13:44.135Z] Total : 6499.44 812.43 0.00 0.00 2456.07 2010.76 8460.10 00:20:05.216 { 00:20:05.216 "results": [ 00:20:05.216 { 00:20:05.216 "job": "nvme0n1", 00:20:05.216 "core_mask": "0x2", 00:20:05.216 "workload": "randwrite", 00:20:05.216 "status": "finished", 00:20:05.216 "queue_depth": 16, 00:20:05.216 "io_size": 131072, 00:20:05.216 "runtime": 2.003865, 00:20:05.216 "iops": 6499.439832523648, 00:20:05.216 "mibps": 812.429979065456, 00:20:05.216 "io_failed": 0, 00:20:05.216 "io_timeout": 0, 00:20:05.216 "avg_latency_us": 2456.071547911548, 00:20:05.216 "min_latency_us": 2010.7636363636364, 00:20:05.216 "max_latency_us": 8460.101818181818 00:20:05.216 } 00:20:05.216 ], 00:20:05.216 "core_count": 1 00:20:05.216 } 00:20:05.216 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:05.216 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:05.216 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:05.216 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:05.216 09:13:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:05.216 | 
select(.opcode=="crc32c") 00:20:05.216 | "\(.module_name) \(.executed)"' 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93385 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93385 ']' 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93385 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.216 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93385 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:05.475 killing process with pid 93385 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93385' 00:20:05.475 Received shutdown signal, test time was about 2.000000 seconds 00:20:05.475 00:20:05.475 Latency(us) 00:20:05.475 [2024-11-20T09:13:44.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.475 
[2024-11-20T09:13:44.394Z] =================================================================================================================== 00:20:05.475 [2024-11-20T09:13:44.394Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93385 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93385 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93109 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93109 ']' 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93109 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93109 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.475 killing process with pid 93109 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93109' 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93109 00:20:05.475 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93109 00:20:05.734 00:20:05.734 real 0m16.453s 00:20:05.734 user 0m32.165s 00:20:05.734 sys 
0m4.302s 00:20:05.734 ************************************ 00:20:05.734 END TEST nvmf_digest_clean 00:20:05.734 ************************************ 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:05.734 ************************************ 00:20:05.734 START TEST nvmf_digest_error 00:20:05.734 ************************************ 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=93489 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # 
waitforlisten 93489 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93489 ']' 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.734 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:05.993 [2024-11-20 09:13:44.701593] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:05.993 [2024-11-20 09:13:44.701686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.993 [2024-11-20 09:13:44.838900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.993 [2024-11-20 09:13:44.900079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.993 [2024-11-20 09:13:44.900122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:05.993 [2024-11-20 09:13:44.900148] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.993 [2024-11-20 09:13:44.900156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.993 [2024-11-20 09:13:44.900178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.993 [2024-11-20 09:13:44.900561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.253 [2024-11-20 09:13:44.985008] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.253 09:13:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.253 09:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.253 null0 00:20:06.253 [2024-11-20 09:13:45.099642] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.253 [2024-11-20 09:13:45.123764] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93521 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93521 /var/tmp/bperf.sock 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93521 ']' 00:20:06.253 09:13:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:06.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.253 09:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:06.513 [2024-11-20 09:13:45.193454] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:06.513 [2024-11-20 09:13:45.193566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93521 ] 00:20:06.513 [2024-11-20 09:13:45.336072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.513 [2024-11-20 09:13:45.398932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.450 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.450 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:07.450 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.450 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.709 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:07.968 nvme0n1 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:07.968 09:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:08.227 Running I/O for 2 seconds... 
00:20:08.227 [2024-11-20 09:13:46.943067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:46.943111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:46.943126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:46.954858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:46.954896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:46.954909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:46.968719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:46.968768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:46.968817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:46.980963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:46.981012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:46.981024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:46.994547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:46.994596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:46.994608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.007910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.007959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.007971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.021218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.021267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.021279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.032492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.032541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.032553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.045899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.045985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.045998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.059376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.059425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.059436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.072526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.072575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.072587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.086702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.086751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:08.227 [2024-11-20 09:13:47.086763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.100343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.100395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.100407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.112482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.112531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.112544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.126412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.126462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.227 [2024-11-20 09:13:47.126474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:08.227 [2024-11-20 09:13:47.141847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:08.227 [2024-11-20 09:13:47.141896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 
nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.227 [2024-11-20 09:13:47.141909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.486 [2024-11-20 09:13:47.155709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.486 [2024-11-20 09:13:47.155759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.486 [2024-11-20 09:13:47.155792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.486 [2024-11-20 09:13:47.169413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.486 [2024-11-20 09:13:47.169462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.486 [2024-11-20 09:13:47.169490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.486 [2024-11-20 09:13:47.184140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.184175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.184188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.198006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.198042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.198054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.211941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.211990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:5823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.212002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.225326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.225377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.225389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.239122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.239188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.239200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.252526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.252576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.252596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.265752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.265812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.265825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.279030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.279078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.279090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.290903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.290951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.290963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.304361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.304411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.304424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.317695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.317743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.317755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.331624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.331671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.331683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.344919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.344966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.344978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.360027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.360090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.360103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.372730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.372790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.372803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.386354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.386433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.386460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.487 [2024-11-20 09:13:47.400660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.487 [2024-11-20 09:13:47.400708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.487 [2024-11-20 09:13:47.400719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.412310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.412361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.412373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.426867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.426915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.426928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.440869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.440929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.440941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.455110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.455187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.465777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.465835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.480549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.480598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.480609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.493999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.494033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.494044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.505291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.505339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.505350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.517498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.517547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.517558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.530425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.530474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.530486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.543316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.543365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.543376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.555399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.555447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.555459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.567313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.567361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.567372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.581347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.581396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.581407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.595847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.595903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.595915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.608517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.608565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.608577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.620844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.620891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.620903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.632598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.632646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.632658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.643818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.643865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.643876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:08.747 [2024-11-20 09:13:47.656864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:08.747 [2024-11-20 09:13:47.656910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:08.747 [2024-11-20 09:13:47.656922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.006 [2024-11-20 09:13:47.670882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.670929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.670941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.684754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.684816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.684829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.697607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.697654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.697666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.712610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.712658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.712670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.726446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.726494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.726522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.739678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.739726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.739738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.752610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.752657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.752669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.763651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.763699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.763711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.777090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.777138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.777149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.790671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.790721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.790733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.803786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.803858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.803886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.816236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.816284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.816295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.829657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.829705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.829716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.842853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.842917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.842929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.855536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.855587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.855600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.868342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.868388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.868400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.881055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.881102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.881114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.894291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.894324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.894336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.906375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.906408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.906419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.007 [2024-11-20 09:13:47.920211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.007 [2024-11-20 09:13:47.920274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.007 [2024-11-20 09:13:47.920286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 19155.00 IOPS, 74.82 MiB/s [2024-11-20T09:13:48.185Z] [2024-11-20 09:13:47.935030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:47.935079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:47.935090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:47.948350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:47.948398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:47.948411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:47.963843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:47.963887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:47.963901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:47.977662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:47.977710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:47.977722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:47.992093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:47.992141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:47.992152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.003647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.003695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.003707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.016881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.016928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.016941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.029415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.029464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.029475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.042992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.043038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.043049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.055708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.055756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.055768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.069221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.069269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.069281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.082517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.082564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.082576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.096967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.097014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.097026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.108182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.108229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.108241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.122544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.122593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.122604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.135800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.135846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.135858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.148656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.148705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:22226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.148716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.162073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.162108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.162120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.266 [2024-11-20 09:13:48.175861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.266 [2024-11-20 09:13:48.175909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.266 [2024-11-20 09:13:48.175921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.525 [2024-11-20 09:13:48.188101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.525 [2024-11-20 09:13:48.188149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.525 [2024-11-20 09:13:48.188176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.525 [2024-11-20 09:13:48.200694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.525 [2024-11-20 09:13:48.200742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.525 [2024-11-20 09:13:48.200754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:09.525 [2024-11-20 09:13:48.213801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540)
00:20:09.525 [2024-11-20 09:13:48.213833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:09.525 [2024-11-20
09:13:48.213844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.226889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.226936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.226947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.239654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.239702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.239714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.252364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.252412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.252423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.265270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.265317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8163 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.265329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.278596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.278643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.278655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.289805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.289851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.289863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.302494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.302542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.302553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.314709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.314742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.314780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.330365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.330401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.330414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.344882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.344917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.344929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.359498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.359549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.359561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.371481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 
00:20:09.525 [2024-11-20 09:13:48.371531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.371544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.385711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.385747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.385771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.399634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.399684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.399696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.413033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.413081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.413094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.426353] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.426388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.426400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.525 [2024-11-20 09:13:48.439812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.525 [2024-11-20 09:13:48.439845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.525 [2024-11-20 09:13:48.439858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.453470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.453505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.453517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.468220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.468270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.468283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.482351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.482386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.482398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.495892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.495940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.495952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.508973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.509022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.509034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.521439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.521486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.521498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.536450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.536514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.536526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.550495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.550576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.550589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.563883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.563930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.563942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.576288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.576335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 
09:13:48.576347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.589755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.589812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.589825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.602914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.602960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.602972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.615825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.615873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.615885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.627421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.627472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11267 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.627483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.639681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.639729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.639741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.652599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.652648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.652659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.666309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.666342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.666353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.679677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.679726] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.679737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:09.784 [2024-11-20 09:13:48.693747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:09.784 [2024-11-20 09:13:48.693804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:09.784 [2024-11-20 09:13:48.693817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.707857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.707905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.707917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.720679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.720728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.720739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.733296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.733344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.733355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.745707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.745755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.745766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.759452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.759486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.759515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.772696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.772744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.772756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.787781] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.787837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.787850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.802092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.802128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.802140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.814343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.814407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.814419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.828388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.828437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.828448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.842359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.842391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.842402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.854808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.854866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.854878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.867281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.867328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.867339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.880339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.880387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.880398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.893053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.893101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.893112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.903093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.903141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.903153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 [2024-11-20 09:13:48.916678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2122540) 00:20:10.045 [2024-11-20 09:13:48.916725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.045 [2024-11-20 09:13:48.916737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:10.045 19176.00 IOPS, 74.91 MiB/s 00:20:10.045 Latency(us) 00:20:10.045 [2024-11-20T09:13:48.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.045 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:10.045 nvme0n1 : 2.00 19201.30 75.01 0.00 0.00 6659.11 3395.96 18945.86 00:20:10.045 
[2024-11-20T09:13:48.964Z] =================================================================================================================== 00:20:10.045 [2024-11-20T09:13:48.964Z] Total : 19201.30 75.01 0.00 0.00 6659.11 3395.96 18945.86 00:20:10.045 { 00:20:10.045 "results": [ 00:20:10.045 { 00:20:10.045 "job": "nvme0n1", 00:20:10.045 "core_mask": "0x2", 00:20:10.045 "workload": "randread", 00:20:10.045 "status": "finished", 00:20:10.045 "queue_depth": 128, 00:20:10.045 "io_size": 4096, 00:20:10.045 "runtime": 2.004031, 00:20:10.045 "iops": 19201.29978029282, 00:20:10.045 "mibps": 75.00507726676882, 00:20:10.045 "io_failed": 0, 00:20:10.045 "io_timeout": 0, 00:20:10.045 "avg_latency_us": 6659.107223587223, 00:20:10.045 "min_latency_us": 3395.9563636363637, 00:20:10.045 "max_latency_us": 18945.861818181816 00:20:10.045 } 00:20:10.045 ], 00:20:10.045 "core_count": 1 00:20:10.045 } 00:20:10.045 09:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:10.045 09:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:10.045 09:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:10.045 09:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:10.045 | .driver_specific 00:20:10.045 | .nvme_error 00:20:10.045 | .status_code 00:20:10.045 | .command_transient_transport_error' 00:20:10.304 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 )) 00:20:10.304 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93521 00:20:10.304 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93521 ']' 00:20:10.304 09:13:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93521 00:20:10.304 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:10.304 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93521 00:20:10.565 killing process with pid 93521 00:20:10.565 Received shutdown signal, test time was about 2.000000 seconds 00:20:10.565 00:20:10.565 Latency(us) 00:20:10.565 [2024-11-20T09:13:49.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.565 [2024-11-20T09:13:49.484Z] =================================================================================================================== 00:20:10.565 [2024-11-20T09:13:49.484Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93521' 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93521 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93521 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 
00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93606 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93606 /var/tmp/bperf.sock 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93606 ']' 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:10.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.565 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:10.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:10.829 Zero copy mechanism will not be used. 00:20:10.829 [2024-11-20 09:13:49.502636] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:20:10.830 [2024-11-20 09:13:49.502732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93606 ] 00:20:10.830 [2024-11-20 09:13:49.652039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.830 [2024-11-20 09:13:49.709539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.089 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.089 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:11.089 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:11.089 09:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:11.348 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:11.607 nvme0n1 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:11.607 09:13:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:11.867 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:11.867 Zero copy mechanism will not be used. 00:20:11.867 Running I/O for 2 seconds... 
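The transport errors that follow are expected: the test armed `accel_error_inject_error -o crc32c -t corrupt` and attached the controller with `--ddgst`, so every READ completion fails its NVMe/TCP data-digest check and is reported as a TRANSIENT TRANSPORT ERROR. The digest in question is CRC32C (the Castagnoli polynomial, reflected form 0x82F63B78), which NVMe/TCP uses for both header and data digests. As a minimal illustrative sketch (not SPDK code — SPDK computes this in `nvme_tcp_accel_seq_recv_compute_crc32_done` via the accel framework, typically hardware-offloaded), a bitwise CRC32C looks like:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), the checksum NVMe/TCP uses for
    its PDU data digest. Reflected polynomial 0x82F63B78; input and
    output are inverted per the standard definition."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right one bit; XOR in the polynomial when the
            # low bit was set (reflected-table algorithm, unrolled).
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF


# Standard check vector for CRC-32C: the ASCII digits "1".."9".
print(hex(crc32c(b"123456789")))  # → 0xe3069283
```

The injector corrupts this computed value on the receive path, so the digest carried in the PDU no longer matches and the I/O is completed with a transient transport error rather than surfacing corrupt data, which is exactly the behavior the `get_transient_errcount` check above asserts on.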
00:20:11.867 [2024-11-20 09:13:50.559961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.560046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.560060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.563758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.563815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.563828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.566829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.566872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.566885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.571308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.571357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.571385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.574740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.574798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.574811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.578487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.578534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.578546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.582375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.582437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.867 [2024-11-20 09:13:50.582449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.867 [2024-11-20 09:13:50.585897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.867 [2024-11-20 09:13:50.585967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.585996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.589448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.589499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.589511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.593397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.593443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.593455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.596776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.596833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.596846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.600921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.600983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:11.868 [2024-11-20 09:13:50.600995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.603856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.603903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.603915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.607689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.607737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.607750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.611987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.612034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.612046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.615310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.615358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.615370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.618059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.618107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.618119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.622514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.622575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.622587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.626474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.626522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.626534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.628984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.629031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.629042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.632233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.632282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.632294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.636194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.636245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.636257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.640610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.640659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.640672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.643856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:11.868 [2024-11-20 09:13:50.643909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.643921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.648646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.648697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.652031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.652081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.652094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.655967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.656033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.656046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.659761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.659840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.659853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.664435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.664524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.664543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.667999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.668048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.868 [2024-11-20 09:13:50.668077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.868 [2024-11-20 09:13:50.672238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.868 [2024-11-20 09:13:50.672287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.672299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.676126] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.676175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.676187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.680493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.680542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.680555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.683707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.683757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.683782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.688071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.688121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.688134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.691748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.691805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.691817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.695826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.695874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.695886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.699853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.699901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.699913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.703361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.703411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.703424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.706731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.706792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.706805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.711193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.711243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.711257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.715132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.715168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.715180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.869 [2024-11-20 09:13:50.718475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:11.869 [2024-11-20 09:13:50.718523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:11.869 [2024-11-20 09:13:50.718535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:11.869 [2024-11-20 09:13:50.722989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:11.869 [2024-11-20 09:13:50.723041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:11.869 [2024-11-20 09:13:50.723068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:11.869 [2024-11-20 09:13:50.727933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:11.869 [2024-11-20 09:13:50.727970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:11.869 [2024-11-20 09:13:50.727983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated records omitted: the same triplet -- nvme_tcp.c:1365 "data digest error on tqpair=(0x89b880)", nvme_qpair.c:243 READ sqid:1 (cid 1-11, nsid:1, varying lba, len:32), nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) -- recurs from 2024-11-20 09:13:50.732 through 09:13:51.019 ...]
00:20:12.133 [2024-11-20 09:13:51.019593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.019620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.022602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.022656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.022684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.026550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.026603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.026632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.030911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.030951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.030964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.034624] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.034677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.034707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.038532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.038582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.038610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.133 [2024-11-20 09:13:51.042444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.133 [2024-11-20 09:13:51.042481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.133 [2024-11-20 09:13:51.042494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.046673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.046741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.046765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.050432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.050484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.050528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.054359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.054411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.054453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.058222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.058260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.058288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.061417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.061467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.061494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.065994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.066032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.066045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.069186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.069241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.069268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.072452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.072503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.072530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.076228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.076280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.076307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.079567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.079618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.079646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.083553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.083604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.394 [2024-11-20 09:13:51.083632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.394 [2024-11-20 09:13:51.087236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.394 [2024-11-20 09:13:51.087288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.087316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.090933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.090990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:12.395 [2024-11-20 09:13:51.091018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.094865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.094916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.094944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.098880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.098933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.098961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.102754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.102816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.102857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.105783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.105846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.105875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.110334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.110388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.110400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.113728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.113789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.113819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.117294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.117357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.117385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.121038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.121091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.121119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.125125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.125176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.125204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.128317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.128354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.128382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.132660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.132712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.132740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.136737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.136798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.136827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.139887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.139936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.139964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.142923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.142957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.142985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.146490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.146541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.146569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.149453] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.149503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.149530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.152926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.152975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.153003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.156139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.156204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.156232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.159446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.159497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.159525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.163070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.163122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.163150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.166132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.166170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.166198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.169501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.169551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.169579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.173170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.173269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.173282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.176728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.176773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.176803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.180333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.395 [2024-11-20 09:13:51.180383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.395 [2024-11-20 09:13:51.180411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.395 [2024-11-20 09:13:51.184277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.184330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.184374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.187574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.187625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.187653] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.191461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.191529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.191557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.195857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.195908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.195936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.199223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.199275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.199303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.203319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.203385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.396 [2024-11-20 09:13:51.203413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.206659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.206711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.206739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.211020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.211071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.211099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.214345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.214398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.214426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.219258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.219312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.219325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.222472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.222524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.222552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.226695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.226747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.226788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.230530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.230581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.230610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.233735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.233781] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.233810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.237533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.237584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.237612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.242234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.242287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.242299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.245439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.245489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.245517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.249541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.396 [2024-11-20 09:13:51.249592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.249620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.253229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.253279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.253306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.256284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.256335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.256363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.259449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.259517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.259545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.262734] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.262799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.262829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.266803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.266880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.266908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.270448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.270498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.270525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.273836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.273867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.273895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.277409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.277459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.277487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.280281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.396 [2024-11-20 09:13:51.280332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.396 [2024-11-20 09:13:51.280359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.396 [2024-11-20 09:13:51.284430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.284481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.284511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.289099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.289151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.289179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.293205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.293255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.293283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.296488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.296539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.296567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.300587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.300639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.300667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.304832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.304882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.304909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.397 [2024-11-20 09:13:51.307748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.397 [2024-11-20 09:13:51.307844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.397 [2024-11-20 09:13:51.307857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.312252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.312304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.312331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.316028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.316080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.316122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.319778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.319823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.659 [2024-11-20 09:13:51.319836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.323208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.323260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.323288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.326443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.326508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.326536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.330642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.330694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.330721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.334977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.335028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.335056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.339526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.339578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.339607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.342661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.342711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.342738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.346467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.346534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.346562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.659 [2024-11-20 09:13:51.350056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.659 [2024-11-20 09:13:51.350094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.659 [2024-11-20 09:13:51.350122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.353001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.353052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.353079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.357191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.357242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.357270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.361690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.361741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.361780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.365861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.660 [2024-11-20 09:13:51.365910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.365973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.368731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.368811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.368824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.372848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.372909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.372938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.377078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.377144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.377156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.380405] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.380455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.380482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.383723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.383798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.383811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.387388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.387439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.387466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.390671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.390723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.390751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.394092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.394129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.394157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.397630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.397681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.397708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.400944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.400979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.401007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.404380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.404431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.404459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.408345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.408395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.408423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.411564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.411616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.411644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.416273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.416324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.416352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.419642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.419692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.419720] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.423375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.423427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.423455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.427684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.427763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.430898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.430933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.430961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.434630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.434681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.660 [2024-11-20 09:13:51.434709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.439702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.439783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.660 [2024-11-20 09:13:51.439798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.660 [2024-11-20 09:13:51.444250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.660 [2024-11-20 09:13:51.444303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.444346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.447068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.447104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.447132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.450784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.450832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.450860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.454426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.454477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.454510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.457484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.457539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.457553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.461279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.461330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.461357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.464572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.464623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.464651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.468296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.468347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.468375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.471869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.471905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.471933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.475649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.475704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.475733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.479403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.661 [2024-11-20 09:13:51.479454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.479481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.483386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.483454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.483482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.487967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.488034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.488047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.491370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.491438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.491467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.494934] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.494984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.495012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.497709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.497784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.497797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.501145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.501195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.501223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.504703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.504780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.504793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.507857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.507907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.507934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.511709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.511800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.511813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.515858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.515909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.515937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.518976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.519026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.519054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.522520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.522573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.522601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.527111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.527150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.527163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.661 [2024-11-20 09:13:51.530288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.661 [2024-11-20 09:13:51.530343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.661 [2024-11-20 09:13:51.530355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.534492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.534530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.534543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.539750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.539799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.539813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.544740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.544787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.547706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.547785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.662 8141.00 IOPS, 1017.62 MiB/s [2024-11-20T09:13:51.581Z] [2024-11-20 09:13:51.554040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.554079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.554092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.558608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.558658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.558685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.562798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.562861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.562889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.565788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.565880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.565892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.662 [2024-11-20 09:13:51.570866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.662 [2024-11-20 09:13:51.570918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.662 [2024-11-20 09:13:51.570931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.575269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.575319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.575347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.578452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.578519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.578546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.582235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.582306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.582319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.585823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.923 [2024-11-20 09:13:51.585856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.585884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.589046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.589097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.589125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.592807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.592872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.592901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.596575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.923 [2024-11-20 09:13:51.596628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.923 [2024-11-20 09:13:51.596656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.923 [2024-11-20 09:13:51.600049] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.600102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.600130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.603641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.603692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.603720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.607070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.607105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.607133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.610990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.611026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.611054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.614101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.614142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.614155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.617721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.617797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.617811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.621241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.621292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.621320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.624722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.624797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.624810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.628713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.628788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.628801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.632033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.632085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.632113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.635865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.635916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.635943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.639274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.639325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.639353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.643148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.643199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.643234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.646657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.646707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.646736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.650316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.650369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.650397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.654005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.654042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.924 [2024-11-20 09:13:51.654070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.657340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.657389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.657416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.661175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.661211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.661240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.665037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.665088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.665115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.668264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.668317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.668345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.671640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.671691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.671719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.675679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.675730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.675758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.678800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.678842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.678870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.924 [2024-11-20 09:13:51.683030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.924 [2024-11-20 09:13:51.683083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.924 [2024-11-20 09:13:51.683111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.687122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.687174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.687202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.690602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.690654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.690681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.694338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.694399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.694442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.697787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.925 [2024-11-20 09:13:51.697836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.697863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.701862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.701911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.701964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.705074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.705109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.705136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.708951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.708987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.709027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.713007] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.713044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.713071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.715928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.715964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.715991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.720205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.720256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.720283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.723104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.723156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.723184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.727017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.727068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.727095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.730405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.730470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.730498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.734162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.734200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.734229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.737745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.737808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.737836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.741827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.741863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.745451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.745549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.745562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.749496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.749556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.749584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.753254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.753306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.753349] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.757004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.757055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.757082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.760424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.760476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.760524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.764496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.764550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.764587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.768336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.768387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:12.925 [2024-11-20 09:13:51.768415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.772437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.772505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.772532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.776125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.776176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.776205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.780351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.780418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.925 [2024-11-20 09:13:51.780446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.925 [2024-11-20 09:13:51.784031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.925 [2024-11-20 09:13:51.784084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.784113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.787783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.787860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.787873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.791637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.791690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.791718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.795393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.795445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.795475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.800002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.800040] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.800053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.803408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.803484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.803497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.807696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.807750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.807790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.811868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.811933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.811946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.815296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:12.926 [2024-11-20 09:13:51.815349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.815377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.819432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.819503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.819516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.823297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.823350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.823378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.827276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.827328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.827356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.831557] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.831611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.831640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.926 [2024-11-20 09:13:51.834737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:12.926 [2024-11-20 09:13:51.834785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:12.926 [2024-11-20 09:13:51.834799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.840094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.840133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.840146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.844973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.845012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.845025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.848820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.848890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.848922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.851774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.851840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.851854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.856194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.856251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.856280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.860567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.860620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.860649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.863604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.863655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.863683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.867741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.867805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.867834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.872606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.872659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.872687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.876261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.876314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.185 [2024-11-20 09:13:51.876342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.185 [2024-11-20 09:13:51.879578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.185 [2024-11-20 09:13:51.879630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.879659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.883306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.883358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.883386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.887156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.887224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.887252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.891570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.891626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.186 [2024-11-20 09:13:51.891639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.894636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.894689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.894717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.899377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.899431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.899444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.904318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.904372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.904401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.908465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.908505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.908518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.911503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.911543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.911572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.916193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.916233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.916247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.920908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.920977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.921005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.925479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.925521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.925534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.928304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.928356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.928385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.933484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.933530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.933544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.937872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.937912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.937925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.940818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:13.186 [2024-11-20 09:13:51.940854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.940867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.944927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.944967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.948605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.948644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.948658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.952874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.952915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.952929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.956301] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.956340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.956354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.960255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.960307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.960336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.964430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.964529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.186 [2024-11-20 09:13:51.968000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.186 [2024-11-20 09:13:51.968055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.186 [2024-11-20 09:13:51.968068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0
00:20:13.186 [2024-11-20 09:13:51.972521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.186 [2024-11-20 09:13:51.972573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.186 [2024-11-20 09:13:51.972602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.186 [2024-11-20 09:13:51.976842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.186 [2024-11-20 09:13:51.976895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.186 [2024-11-20 09:13:51.976908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.186 [2024-11-20 09:13:51.980160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.186 [2024-11-20 09:13:51.980212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.186 [2024-11-20 09:13:51.980240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.186 [2024-11-20 09:13:51.984767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.186 [2024-11-20 09:13:51.984818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.186 [2024-11-20 09:13:51.984832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.186 [2024-11-20 09:13:51.989457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.186 [2024-11-20 09:13:51.989509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:51.989538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:51.992985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:51.993041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:51.993054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:51.997888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:51.997967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:51.997982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.002759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.002826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.002856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.007522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.007576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.007590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.010387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.010454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.010483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.014874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.014912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.014942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.018258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.018298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.018311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.022633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.022695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.022708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.025649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.025689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.025702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.029383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.029437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.029451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.034548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.034633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.034663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.038013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.038053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.038066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.041907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.041970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.041984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.046450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.046505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.046518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.049266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.049320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.049334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.053407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.053446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.053459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.056945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.056984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.056997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.061170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.061225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.061238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.064428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.064466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.064494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.068417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.068486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.068515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.072251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.072305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.072335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.075970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.076009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.076023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.079451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.079504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.079517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.084276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.084329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.084359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.089395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.089447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.089476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.093692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.187 [2024-11-20 09:13:52.093732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.187 [2024-11-20 09:13:52.093745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.187 [2024-11-20 09:13:52.096681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.188 [2024-11-20 09:13:52.096732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.188 [2024-11-20 09:13:52.096761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.449 [2024-11-20 09:13:52.101475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.449 [2024-11-20 09:13:52.101546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.449 [2024-11-20 09:13:52.101559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.449 [2024-11-20 09:13:52.105256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.105310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.105339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.109105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.109158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.109187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.113217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.113272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.113285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.117247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.117286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.117298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.121195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.121248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.121292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.125424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.125493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.125507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.130094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.130134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.130147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.133844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.133909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.133965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.137432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.137483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.137512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.141737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.141832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.141861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.145896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.145968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.145981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.149290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.149326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.149339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.153298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.153352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.153381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.157449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.157488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.157516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.160711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.160793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.160807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.164826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.164876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.164905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.168301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.168368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.168397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.172635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.172688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.172733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.176100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.176135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.176164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.179992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.180029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.180057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.183229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.183280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.183308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.186307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.186359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.186388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.189908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.189991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.190004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.194084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.194123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.194137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.196909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.196945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.196957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.201572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.201624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.201653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.450 [2024-11-20 09:13:52.205862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.450 [2024-11-20 09:13:52.205964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.450 [2024-11-20 09:13:52.205979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.208784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.208832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.208861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.213298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.213350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.213379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.216681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.216732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.216761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.221300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.221350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.221379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.224553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.224616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.224645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.228808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.228853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.228882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.232479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.232531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.232559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.236422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.236472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.236500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.239389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.239441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.239468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.243430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.243482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.243510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.247018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.247069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.247097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.250559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.250609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.250638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.253981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.254018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.254047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.257103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.257154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.257182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.261040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.261120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.265053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.265104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.265132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.268445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.268496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.268524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.272325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.272377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.272406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.276290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.276341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.276369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.280237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.280289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.451 [2024-11-20 09:13:52.280318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:13.451 [2024-11-20 09:13:52.283270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.451 [2024-11-20 09:13:52.283321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:13.452 [2024-11-20 09:13:52.283349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:13.452 [2024-11-20 09:13:52.287373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880)
00:20:13.452 [2024-11-20 09:13:52.287427] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.287455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.290960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.291034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.291046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.294162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.294201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.294214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.297675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.297727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.297756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.301571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:13.452 [2024-11-20 09:13:52.301624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.301653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.304984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.305020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.305048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.308575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.308626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.308654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.312290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.312341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.312370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.315748] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.315808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.315837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.319233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.319284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.319312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.323218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.323270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.323298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.326654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.326706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.326734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.329447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.329513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.329541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.333791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.333867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.333896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.338347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.338399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.338427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.341699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.341751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.341792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.345328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.345378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.345407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.349526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.349580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.349608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.352605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.352655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.352683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.356499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.356550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.356578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.452 [2024-11-20 09:13:52.360459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.452 [2024-11-20 09:13:52.360511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.452 [2024-11-20 09:13:52.360539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.364591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.364646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.364659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.368758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.368821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.368850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.372124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.372163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.714 [2024-11-20 09:13:52.372176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.376112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.376166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.376195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.380181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.380235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.380264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.384279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.384331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.384360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.387454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.387507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.387536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.391310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.391363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.391392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.394871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.394923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.394952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.398384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.398450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.398478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.401846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.401898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.401936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.406066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.406105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.406119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.409938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.409993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.410006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.413235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.413271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.413299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.417277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 
00:20:13.714 [2024-11-20 09:13:52.417314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.417342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.420801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.420836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.420865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.424565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.424602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.424631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.428895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.428931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.428960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.432156] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.432210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.432238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.436104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.436187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.439652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.439706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.714 [2024-11-20 09:13:52.439734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.714 [2024-11-20 09:13:52.443936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.714 [2024-11-20 09:13:52.443989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.444018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.448157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.448210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.448238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.451241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.451295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.451323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.455883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.455935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.455963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.460461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.460515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.460544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.463701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.463798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.463813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.467697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.467749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.467791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.470838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.470889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.470916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.475083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.475134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.475149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.479014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.479053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.479066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.482385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.482454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.482467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.486551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.486590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.486603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.489727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.489777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:13.715 [2024-11-20 09:13:52.489792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.493706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.493784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.493797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.497983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.498022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.498036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.500780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.500841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.500884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.505721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.505785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.505799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.509243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.509296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.509324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.512364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.512415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.512443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.516701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.516784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.516799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.520167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.520251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.520279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.524596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.524649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.524693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.529315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.529369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.529382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.533936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.533974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.533987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.536705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.536782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.536796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.541111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.541161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.541189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.544538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.715 [2024-11-20 09:13:52.544590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.715 [2024-11-20 09:13:52.544619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:13.715 [2024-11-20 09:13:52.548898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b880) 00:20:13.716 [2024-11-20 09:13:52.548950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:13.716 [2024-11-20 09:13:52.548979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:13.716 8098.50 IOPS, 1012.31 MiB/s 00:20:13.716 Latency(us) 
00:20:13.716 [2024-11-20T09:13:52.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.716 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:13.716 nvme0n1 : 2.00 8095.79 1011.97 0.00 0.00 1972.55 610.68 12451.84 00:20:13.716 [2024-11-20T09:13:52.635Z] =================================================================================================================== 00:20:13.716 [2024-11-20T09:13:52.635Z] Total : 8095.79 1011.97 0.00 0.00 1972.55 610.68 12451.84 00:20:13.716 { 00:20:13.716 "results": [ 00:20:13.716 { 00:20:13.716 "job": "nvme0n1", 00:20:13.716 "core_mask": "0x2", 00:20:13.716 "workload": "randread", 00:20:13.716 "status": "finished", 00:20:13.716 "queue_depth": 16, 00:20:13.716 "io_size": 131072, 00:20:13.716 "runtime": 2.002646, 00:20:13.716 "iops": 8095.7892707947385, 00:20:13.716 "mibps": 1011.9736588493423, 00:20:13.716 "io_failed": 0, 00:20:13.716 "io_timeout": 0, 00:20:13.716 "avg_latency_us": 1972.5473865528786, 00:20:13.716 "min_latency_us": 610.6763636363636, 00:20:13.716 "max_latency_us": 12451.84 00:20:13.716 } 00:20:13.716 ], 00:20:13.716 "core_count": 1 00:20:13.716 } 00:20:13.716 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:13.716 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:13.716 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:13.716 | .driver_specific 00:20:13.716 | .nvme_error 00:20:13.716 | .status_code 00:20:13.716 | .command_transient_transport_error' 00:20:13.716 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:13.975 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 523 > 0 )) 
00:20:13.975 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93606 00:20:13.975 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93606 ']' 00:20:13.975 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93606 00:20:13.975 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93606 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:14.234 killing process with pid 93606 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93606' 00:20:14.234 Received shutdown signal, test time was about 2.000000 seconds 00:20:14.234 00:20:14.234 Latency(us) 00:20:14.234 [2024-11-20T09:13:53.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.234 [2024-11-20T09:13:53.153Z] =================================================================================================================== 00:20:14.234 [2024-11-20T09:13:53.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93606 00:20:14.234 09:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93606 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93683 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93683 /var/tmp/bperf.sock 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93683 ']' 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.234 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:14.493 [2024-11-20 09:13:53.178059] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:14.493 [2024-11-20 09:13:53.178154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93683 ] 00:20:14.493 [2024-11-20 09:13:53.321593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.493 [2024-11-20 09:13:53.379603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.751 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.751 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:14.751 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:14.751 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.010 09:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:15.269 nvme0n1 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:15.269 09:13:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:15.528 Running I/O for 2 seconds... 
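The summary numbers bdevperf reports in these runs are internally consistent: in the earlier randread results JSON, `mibps` is just `iops` times the 131072-byte `io_size`, converted to MiB. A quick check with the logged values:

```python
# Values copied from the randread results JSON earlier in the log.
iops = 8095.7892707947385
io_size = 131072  # bytes per I/O (128 KiB)

# MiB/s = IOPS * bytes-per-IO / 2^20
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # → 1011.97, matching the logged "mibps" field
```

The same relationship holds for the table line `8095.79 1011.97` printed just above that JSON block.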
00:20:15.528 [2024-11-20 09:13:54.240257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166dece0 00:20:15.528 [2024-11-20 09:13:54.241535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.241599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.250958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7970 00:20:15.528 [2024-11-20 09:13:54.252076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:18689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.252129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.261803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6890 00:20:15.528 [2024-11-20 09:13:54.262924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.262972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.274916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ee190 00:20:15.528 [2024-11-20 09:13:54.276552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.276603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.283040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1710 00:20:15.528 [2024-11-20 09:13:54.283918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.283985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.297495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e84c0 00:20:15.528 [2024-11-20 09:13:54.299092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.299158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.307916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f9b30 00:20:15.528 [2024-11-20 09:13:54.309005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.309053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.318634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f46d0 00:20:15.528 [2024-11-20 09:13:54.319756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.319827] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.331975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ebfd0 00:20:15.528 [2024-11-20 09:13:54.333702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.333750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.339966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:15.528 [2024-11-20 09:13:54.340840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.340909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.352956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ea680 00:20:15.528 [2024-11-20 09:13:54.354526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:20053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.354576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.363351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fbcf0 00:20:15.528 [2024-11-20 09:13:54.365111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.365177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.375708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f3e60 00:20:15.528 [2024-11-20 09:13:54.376957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.376992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.390696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e84c0 00:20:15.528 [2024-11-20 09:13:54.392514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.392564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.399135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:15.528 [2024-11-20 09:13:54.400091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.400134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.413221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f2510 00:20:15.528 [2024-11-20 09:13:54.414800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:15.528 [2024-11-20 09:13:54.414850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.424078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed920 00:20:15.528 [2024-11-20 09:13:54.425357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.425407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:15.528 [2024-11-20 09:13:54.435363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6020 00:20:15.528 [2024-11-20 09:13:54.436725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.528 [2024-11-20 09:13:54.436801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.449755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6300 00:20:15.787 [2024-11-20 09:13:54.451640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:20778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.451689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.458050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fcdd0 00:20:15.787 [2024-11-20 09:13:54.459051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:2235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.459098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.471649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166dfdc0 00:20:15.787 [2024-11-20 09:13:54.473288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.473335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.482453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166efae0 00:20:15.787 [2024-11-20 09:13:54.483813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.483854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.493521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f81e0 00:20:15.787 [2024-11-20 09:13:54.494891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.494938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.504359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e5a90 00:20:15.787 [2024-11-20 09:13:54.505445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.787 [2024-11-20 09:13:54.505510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:15.787 [2024-11-20 09:13:54.515659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e49b0 00:20:15.788 [2024-11-20 09:13:54.516714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.516783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.529065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1f80 00:20:15.788 [2024-11-20 09:13:54.530816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.530871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.537290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eea00 00:20:15.788 [2024-11-20 09:13:54.538128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.538177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.551162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fa3a0 00:20:15.788 
[2024-11-20 09:13:54.552577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.561587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e7c50 00:20:15.788 [2024-11-20 09:13:54.562804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.562864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.572332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df118 00:20:15.788 [2024-11-20 09:13:54.573461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.573523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.585479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eb760 00:20:15.788 [2024-11-20 09:13:54.587328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.587375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.593730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a34e40) with pdu=0x2000166ec840 00:20:15.788 [2024-11-20 09:13:54.594689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.594738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.608387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f35f0 00:20:15.788 [2024-11-20 09:13:54.609975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.610011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.619304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:15.788 [2024-11-20 09:13:54.620588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.620638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.630577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ddc00 00:20:15.788 [2024-11-20 09:13:54.631803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.631839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.643906] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed920 00:20:15.788 [2024-11-20 09:13:54.645594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.645641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.651831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e3060 00:20:15.788 [2024-11-20 09:13:54.652757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.652813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.664568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e38d0 00:20:15.788 [2024-11-20 09:13:54.666248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.666342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.675945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fcdd0 00:20:15.788 [2024-11-20 09:13:54.677343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.677408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:20:15.788 [2024-11-20 09:13:54.687097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f92c0 00:20:15.788 [2024-11-20 09:13:54.688340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.688386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:15.788 [2024-11-20 09:13:54.700116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166efae0 00:20:15.788 [2024-11-20 09:13:54.702149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:15.788 [2024-11-20 09:13:54.702185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.708939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e0ea0 00:20:16.048 [2024-11-20 09:13:54.710030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.710064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.722512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f3a28 00:20:16.048 [2024-11-20 09:13:54.724026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.724074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:29 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.732841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166dece0 00:20:16.048 [2024-11-20 09:13:54.734219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.734255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.743208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6890 00:20:16.048 [2024-11-20 09:13:54.744352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.744399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.753324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:16.048 [2024-11-20 09:13:54.754457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.754505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.764336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e01f8 00:20:16.048 [2024-11-20 09:13:54.765443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.765506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.778039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f1ca0 00:20:16.048 [2024-11-20 09:13:54.779798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.779840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.786847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f4f40 00:20:16.048 [2024-11-20 09:13:54.787657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.787690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.800806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fcdd0 00:20:16.048 [2024-11-20 09:13:54.802352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.802401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.811875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6738 00:20:16.048 [2024-11-20 09:13:54.813066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:16.048 [2024-11-20 09:13:54.813116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.822618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eb760 00:20:16.048 [2024-11-20 09:13:54.823720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.823787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.835762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df988 00:20:16.048 [2024-11-20 09:13:54.837436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.837484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.843750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:16.048 [2024-11-20 09:13:54.844547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.844578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.856826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e49b0 00:20:16.048 [2024-11-20 09:13:54.858359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21985 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.858408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.867400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e88f8 00:20:16.048 [2024-11-20 09:13:54.868679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.868730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.878581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e49b0 00:20:16.048 [2024-11-20 09:13:54.879790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.892665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:16.048 [2024-11-20 09:13:54.894524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.894573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.900611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df988 00:20:16.048 [2024-11-20 09:13:54.901532] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.901593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.914028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eb760 00:20:16.048 [2024-11-20 09:13:54.915585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.915634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.924292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e23b8 00:20:16.048 [2024-11-20 09:13:54.925502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.925550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.935878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fcdd0 00:20:16.048 [2024-11-20 09:13:54.937210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.937259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.948914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f4f40 00:20:16.048 [2024-11-20 09:13:54.950765] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.950820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:16.048 [2024-11-20 09:13:54.956692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f1ca0 00:20:16.048 [2024-11-20 09:13:54.957685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.048 [2024-11-20 09:13:54.957748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:54.971040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:16.307 [2024-11-20 09:13:54.972537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:54.972585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:54.981217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e01f8 00:20:16.307 [2024-11-20 09:13:54.982552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:54.982600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:54.991693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with 
pdu=0x2000166f9f68 00:20:16.307 [2024-11-20 09:13:54.993020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:54.993052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.001637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f4f40 00:20:16.307 [2024-11-20 09:13:55.002757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.002833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.012278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166efae0 00:20:16.307 [2024-11-20 09:13:55.013311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.013357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.025087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6738 00:20:16.307 [2024-11-20 09:13:55.026786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.026825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.032778] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1b48 00:20:16.307 [2024-11-20 09:13:55.033525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.033557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.045426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7da8 00:20:16.307 [2024-11-20 09:13:55.046860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.046934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.055354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:16.307 [2024-11-20 09:13:55.056497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.056544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.065836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed920 00:20:16.307 [2024-11-20 09:13:55.066960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.067007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.078450] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e88f8 00:20:16.307 [2024-11-20 09:13:55.080259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.080306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.086268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9168 00:20:16.307 [2024-11-20 09:13:55.087130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.087161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.098857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed0b0 00:20:16.307 [2024-11-20 09:13:55.100297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.100344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:16.307 [2024-11-20 09:13:55.108808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df988 00:20:16.307 [2024-11-20 09:13:55.110064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.307 [2024-11-20 09:13:55.110097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 
00:20:16.307 [2024-11-20 09:13:55.120038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7538 00:20:16.307 [2024-11-20 09:13:55.121347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.121396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.134687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e23b8 00:20:16.308 [2024-11-20 09:13:55.136550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.136593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.142956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6fa8 00:20:16.308 [2024-11-20 09:13:55.143887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.143921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.154671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eee38 00:20:16.308 [2024-11-20 09:13:55.155662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.155725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.165738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7538 00:20:16.308 [2024-11-20 09:13:55.166590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.166623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.179216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6738 00:20:16.308 [2024-11-20 09:13:55.180763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.180839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.190620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed920 00:20:16.308 [2024-11-20 09:13:55.192398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.192460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.201399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f46d0 00:20:16.308 [2024-11-20 09:13:55.202816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.202893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.211590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ef270 00:20:16.308 [2024-11-20 09:13:55.212839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:25249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.308 [2024-11-20 09:13:55.212895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:16.308 [2024-11-20 09:13:55.222289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eb760 00:20:16.567 [2024-11-20 09:13:55.224926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.224979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:16.567 22338.00 IOPS, 87.26 MiB/s [2024-11-20T09:13:55.486Z] [2024-11-20 09:13:55.237318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166edd58 00:20:16.567 [2024-11-20 09:13:55.238958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.238993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.247874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f0ff8 00:20:16.567 [2024-11-20 09:13:55.249120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:16.567 [2024-11-20 09:13:55.249169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.258664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df550 00:20:16.567 [2024-11-20 09:13:55.259778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.259816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.269444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e3d08 00:20:16.567 [2024-11-20 09:13:55.270634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.270681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.280417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e99d8 00:20:16.567 [2024-11-20 09:13:55.281518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.281565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.291655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df550 00:20:16.567 [2024-11-20 09:13:55.292929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:11413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.292976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.304067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fda78 00:20:16.567 [2024-11-20 09:13:55.305516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.305566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.315158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f2d80 00:20:16.567 [2024-11-20 09:13:55.316354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.316403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.326494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6458 00:20:16.567 [2024-11-20 09:13:55.327724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.327798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.339705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e4de8 00:20:16.567 [2024-11-20 09:13:55.341414] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.341462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.347646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e5658 00:20:16.567 [2024-11-20 09:13:55.348576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.348619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.361430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed0b0 00:20:16.567 [2024-11-20 09:13:55.363063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.363110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.372288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ee5c8 00:20:16.567 [2024-11-20 09:13:55.373482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.373529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:16.567 [2024-11-20 09:13:55.383468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166feb58 00:20:16.567 
[2024-11-20 09:13:55.384960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.567 [2024-11-20 09:13:55.385007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.393784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1f80 00:20:16.568 [2024-11-20 09:13:55.395121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.395170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.404596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6300 00:20:16.568 [2024-11-20 09:13:55.405835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.405881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.417727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ebfd0 00:20:16.568 [2024-11-20 09:13:55.419628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.419676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.428604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a34e40) with pdu=0x2000166ff3c8 00:20:16.568 [2024-11-20 09:13:55.430515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.430561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.437167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fa3a0 00:20:16.568 [2024-11-20 09:13:55.438198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.438234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.450684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fda78 00:20:16.568 [2024-11-20 09:13:55.452188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.452236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.460494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaab8 00:20:16.568 [2024-11-20 09:13:55.461840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.461883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.471168] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fd640 00:20:16.568 [2024-11-20 09:13:55.472430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.568 [2024-11-20 09:13:55.472477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.568 [2024-11-20 09:13:55.483213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fac10 00:20:16.827 [2024-11-20 09:13:55.484611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.484647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.494045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f5be8 00:20:16.827 [2024-11-20 09:13:55.495257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.495304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.507769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f2d80 00:20:16.827 [2024-11-20 09:13:55.509562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.509611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 
dnr:0 00:20:16.827 [2024-11-20 09:13:55.515581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f35f0 00:20:16.827 [2024-11-20 09:13:55.516443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.516474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.529408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f5be8 00:20:16.827 [2024-11-20 09:13:55.531233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.531281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.537428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f5be8 00:20:16.827 [2024-11-20 09:13:55.538355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.538434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.552006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e4140 00:20:16.827 [2024-11-20 09:13:55.553858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.553908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.560391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eee38 00:20:16.827 [2024-11-20 09:13:55.561349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.561398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.574980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:16.827 [2024-11-20 09:13:55.576550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.576600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.585712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:16.827 [2024-11-20 09:13:55.587057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.587106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.596855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1f80 00:20:16.827 [2024-11-20 09:13:55.598096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.598132] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.610261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166efae0 00:20:16.827 [2024-11-20 09:13:55.612244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.612291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.618888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f4f40 00:20:16.827 [2024-11-20 09:13:55.619906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.619954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.633501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fcdd0 00:20:16.827 [2024-11-20 09:13:55.635214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.635262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.644654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f81e0 00:20:16.827 [2024-11-20 09:13:55.646101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.646136] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.655823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7970 00:20:16.827 [2024-11-20 09:13:55.657139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.657187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.667280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e84c0 00:20:16.827 [2024-11-20 09:13:55.668587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.668636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.679717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f2d80 00:20:16.827 [2024-11-20 09:13:55.681451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.681515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.687759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f35f0 00:20:16.827 [2024-11-20 09:13:55.688683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:16.827 [2024-11-20 09:13:55.688730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.702011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eea00 00:20:16.827 [2024-11-20 09:13:55.703794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.827 [2024-11-20 09:13:55.703846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:16.827 [2024-11-20 09:13:55.710198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eea00 00:20:16.828 [2024-11-20 09:13:55.711173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.828 [2024-11-20 09:13:55.711220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:16.828 [2024-11-20 09:13:55.723633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ee5c8 00:20:16.828 [2024-11-20 09:13:55.725130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.828 [2024-11-20 09:13:55.725193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:16.828 [2024-11-20 09:13:55.733883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f5be8 00:20:16.828 [2024-11-20 09:13:55.735143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13416 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:16.828 [2024-11-20 09:13:55.735191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.745523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e73e0 00:20:17.087 [2024-11-20 09:13:55.746912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.746988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.758884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ebb98 00:20:17.087 [2024-11-20 09:13:55.760587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.760635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.766559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f9f68 00:20:17.087 [2024-11-20 09:13:55.767511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.767558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.779462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f0788 00:20:17.087 [2024-11-20 09:13:55.780975] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.781021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.789751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e95a0 00:20:17.087 [2024-11-20 09:13:55.791316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.791380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.800816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166feb58 00:20:17.087 [2024-11-20 09:13:55.802146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.802181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.813720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6fa8 00:20:17.087 [2024-11-20 09:13:55.815816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.815872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.821642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e99d8 00:20:17.087 [2024-11-20 09:13:55.822758] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.822832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.835041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7970 00:20:17.087 [2024-11-20 09:13:55.836580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.836627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.842959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e0a68 00:20:17.087 [2024-11-20 09:13:55.843718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:18103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.843750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.856148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fda78 00:20:17.087 [2024-11-20 09:13:55.857572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.857626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.867569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ebb98 
00:20:17.087 [2024-11-20 09:13:55.868787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.868827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.880139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f0788 00:20:17.087 [2024-11-20 09:13:55.881719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.881774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.887887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fac10 00:20:17.087 [2024-11-20 09:13:55.888636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.888670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.900685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e5ec8 00:20:17.087 [2024-11-20 09:13:55.902129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.902164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.911064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a34e40) with pdu=0x2000166fd208 00:20:17.087 [2024-11-20 09:13:55.912138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.912185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.921700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e3d08 00:20:17.087 [2024-11-20 09:13:55.922895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.922927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.934919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ee5c8 00:20:17.087 [2024-11-20 09:13:55.936575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.936621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.942774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e3060 00:20:17.087 [2024-11-20 09:13:55.943591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.943622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.956075] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f7538 00:20:17.087 [2024-11-20 09:13:55.957493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.087 [2024-11-20 09:13:55.957541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:17.087 [2024-11-20 09:13:55.964729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f4298 00:20:17.087 [2024-11-20 09:13:55.965570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.088 [2024-11-20 09:13:55.965602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:17.088 [2024-11-20 09:13:55.975848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e9e10 00:20:17.088 [2024-11-20 09:13:55.976691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.088 [2024-11-20 09:13:55.976724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:17.088 [2024-11-20 09:13:55.988769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f5be8 00:20:17.088 [2024-11-20 09:13:55.989749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.088 [2024-11-20 09:13:55.989805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:20:17.088 [2024-11-20 09:13:55.999136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f8a50 00:20:17.088 [2024-11-20 09:13:56.000119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.088 [2024-11-20 09:13:56.000183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.010834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed4e8 00:20:17.347 [2024-11-20 09:13:56.011543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.011575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.021267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eea00 00:20:17.347 [2024-11-20 09:13:56.021862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.021905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.033936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eff18 00:20:17.347 [2024-11-20 09:13:56.035256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.035305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.044501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e73e0 00:20:17.347 [2024-11-20 09:13:56.045750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.045823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.055526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e3060 00:20:17.347 [2024-11-20 09:13:56.056916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.056946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.065870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166df988 00:20:17.347 [2024-11-20 09:13:56.066969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.067017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.076699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fa3a0 00:20:17.347 [2024-11-20 09:13:56.077750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.077823] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.089874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ed0b0 00:20:17.347 [2024-11-20 09:13:56.091664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.091713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.097859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f8618 00:20:17.347 [2024-11-20 09:13:56.098645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.098677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.111136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e5ec8 00:20:17.347 [2024-11-20 09:13:56.112509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.112557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.119950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f46d0 00:20:17.347 [2024-11-20 09:13:56.120741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.120796] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.134660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6458 00:20:17.347 [2024-11-20 09:13:56.136081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.136113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.145875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166ea248 00:20:17.347 [2024-11-20 09:13:56.147055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.147106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.157846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166fdeb0 00:20:17.347 [2024-11-20 09:13:56.159034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.159065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.172290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e6300 00:20:17.347 [2024-11-20 09:13:56.174119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:17.347 [2024-11-20 09:13:56.174155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.180592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166dece0 00:20:17.347 [2024-11-20 09:13:56.181472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.181534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.193315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166eaef0 00:20:17.347 [2024-11-20 09:13:56.194827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.194902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.203611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166e1710 00:20:17.347 [2024-11-20 09:13:56.204820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.204893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:17.347 [2024-11-20 09:13:56.214902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f6020 00:20:17.347 [2024-11-20 09:13:56.216216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21558 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.216248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:17.347 22499.00 IOPS, 87.89 MiB/s [2024-11-20T09:13:56.266Z] [2024-11-20 09:13:56.226179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a34e40) with pdu=0x2000166f57b0 00:20:17.347 [2024-11-20 09:13:56.226908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:17.347 [2024-11-20 09:13:56.226936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:17.347 00:20:17.347 Latency(us) 00:20:17.347 [2024-11-20T09:13:56.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.347 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:17.347 nvme0n1 : 2.01 22517.93 87.96 0.00 0.00 5675.94 2412.92 16324.42 00:20:17.347 [2024-11-20T09:13:56.266Z] =================================================================================================================== 00:20:17.347 [2024-11-20T09:13:56.266Z] Total : 22517.93 87.96 0.00 0.00 5675.94 2412.92 16324.42 00:20:17.347 { 00:20:17.347 "results": [ 00:20:17.347 { 00:20:17.347 "job": "nvme0n1", 00:20:17.347 "core_mask": "0x2", 00:20:17.347 "workload": "randwrite", 00:20:17.347 "status": "finished", 00:20:17.347 "queue_depth": 128, 00:20:17.347 "io_size": 4096, 00:20:17.347 "runtime": 2.00609, 00:20:17.347 "iops": 22517.932894336744, 00:20:17.347 "mibps": 87.9606753685029, 00:20:17.347 "io_failed": 0, 00:20:17.347 "io_timeout": 0, 00:20:17.347 "avg_latency_us": 5675.935455249817, 00:20:17.347 "min_latency_us": 2412.9163636363637, 00:20:17.347 "max_latency_us": 16324.421818181818 00:20:17.348 } 00:20:17.348 ], 00:20:17.348 
"core_count": 1 00:20:17.348 } 00:20:17.348 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:17.348 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:17.348 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:17.348 | .driver_specific 00:20:17.348 | .nvme_error 00:20:17.348 | .status_code 00:20:17.348 | .command_transient_transport_error' 00:20:17.348 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 177 > 0 )) 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93683 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93683 ']' 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93683 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93683 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:17.913 killing process with pid 93683 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:17.913 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 93683' 00:20:17.913 Received shutdown signal, test time was about 2.000000 seconds 00:20:17.913 00:20:17.913 Latency(us) 00:20:17.913 [2024-11-20T09:13:56.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.913 [2024-11-20T09:13:56.832Z] =================================================================================================================== 00:20:17.913 [2024-11-20T09:13:56.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93683 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93683 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93758 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93758 /var/tmp/bperf.sock 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93758 ']' 00:20:17.914 09:13:56 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:17.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.914 09:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:18.172 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:18.172 Zero copy mechanism will not be used. 00:20:18.172 [2024-11-20 09:13:56.877204] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:20:18.172 [2024-11-20 09:13:56.877301] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93758 ] 00:20:18.172 [2024-11-20 09:13:57.022500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.172 [2024-11-20 09:13:57.070976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.430 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.430 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:18.430 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:18.430 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.688 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.946 nvme0n1 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:18.946 09:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:19.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:19.205 Zero copy mechanism will not be used. 00:20:19.205 Running I/O for 2 seconds... 
00:20:19.205 [2024-11-20 09:13:57.959914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.960041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.960070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.965448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.965557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.965580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.970540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.970657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.970678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.975733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.975849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.975873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.981125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.981283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.981305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.986558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.986651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.986673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.992035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.992173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.992196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:57.997416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:57.997523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:57.997544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.002759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.002868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.002890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.008041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.008216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.008259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.013275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.013361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.013397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.018395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.018499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.205 [2024-11-20 09:13:58.018520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.023530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.023621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.023657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.029079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.029163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.029187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.034323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.034430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.034450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.039442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.205 [2024-11-20 09:13:58.039526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.205 [2024-11-20 09:13:58.039546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.205 [2024-11-20 09:13:58.044376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.044491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.044544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.049470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.049555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.049576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.054572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.054662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.054682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.059585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.059668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.059688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.064787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.064913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.064951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.069771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.069877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.069915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.074863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.074971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.075013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.079711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 
00:20:19.206 [2024-11-20 09:13:58.079823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.079844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.085145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.085247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.085268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.090535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.090611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.090637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.095564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.095654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.095675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.100621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.100735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.100755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.105587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.105700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.105722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.111047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.111161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.111183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.206 [2024-11-20 09:13:58.116147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.206 [2024-11-20 09:13:58.116219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.206 [2024-11-20 09:13:58.116242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.121444] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.121514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.121537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.126720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.126822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.126846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.131881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.131973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.132011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.136879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.136970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.136991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:20:19.466 [2024-11-20 09:13:58.141649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.141746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.141783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.146641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.146732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.146753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.151499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.151590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.151610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.156466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.156555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.156575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.161466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.161557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.161577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.166441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.166545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.166566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.171382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.171476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.171496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.176320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.176409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.176430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.181204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.181287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.181308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.186307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.186414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.186436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.191529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.466 [2024-11-20 09:13:58.191624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.466 [2024-11-20 09:13:58.191646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.466 [2024-11-20 09:13:58.196758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.196893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.467 [2024-11-20 09:13:58.196916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.202283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.202377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.202399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.207582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.207673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.207694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.212836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.212915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.212937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.217999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.218084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.218106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.223211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.223294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.223314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.228378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.228468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.228489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.233576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.233672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.233692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.238567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.238642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.238664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.243357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.243450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.243470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.248100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.248187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.248207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.253108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.253197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.253217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.257875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 
00:20:19.467 [2024-11-20 09:13:58.257994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.258016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.262709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.262805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.262825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.267511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.267585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.267605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.272304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.272406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.272427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.277111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.277192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.277212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.281854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.281990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.282011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.287317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.287418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.287439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.292530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.292621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.292641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.297264] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.297353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.297373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.302006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.302113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.302134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.306828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.306942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.306963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.311647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.311739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.311759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:20:19.467 [2024-11-20 09:13:58.316433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.316530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.316550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.321214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.321303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.321323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.325925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.326048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.467 [2024-11-20 09:13:58.326069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.467 [2024-11-20 09:13:58.330678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.467 [2024-11-20 09:13:58.330783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.330804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.335515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.335601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.335620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.340415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.340505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.340525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.345222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.345314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.345335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.349992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.350078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.350100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.354800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.354887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.359562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.359651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.359671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.364432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.364521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.364561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.369219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.369306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.468 [2024-11-20 09:13:58.369326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.373968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.374065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.374086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.468 [2024-11-20 09:13:58.378911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.468 [2024-11-20 09:13:58.379003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.468 [2024-11-20 09:13:58.379023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.384186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.384270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.384291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.389401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.389491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.389511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.394135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.394224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.394247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.398991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.399082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.399103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.403805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.403892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.403912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.408542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.408629] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.413384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.413473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.413493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.418226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.418339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.418359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.423033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.423126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.423146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.427795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.427904] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.427924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.432596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.432684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.432703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.437455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.437545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.437565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.442364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.442470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.442490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.447210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with 
pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.447297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.447317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.452412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.452539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.452559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.457706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.457799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.457830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.462531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.462624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.462644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.728 [2024-11-20 09:13:58.467484] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.728 [2024-11-20 09:13:58.467557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.728 [2024-11-20 09:13:58.467578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.472258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.472347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.472367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.477194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.477272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.477292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.482105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.482179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.482201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 
09:13:58.486989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.487075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.487096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.491811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.491916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.491936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.496474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.496546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.496566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.501230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.501319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.501339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.506111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.506189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.506212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.511198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.511290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.511328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.516132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.516218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.516238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.520995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.521089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.521111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.525853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.525961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.525999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.530868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.530994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.531014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.535823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.535940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.535961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.540547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.540624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.540661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.545352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.545442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.545463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.550158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.550250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.550287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.555097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.555181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.555201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.559894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.559984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.729 [2024-11-20 09:13:58.560004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.564621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.564696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.564715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.569474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.569565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.569586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.574302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.574407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.574427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.579382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.579490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.579526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.584428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.584523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.584543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.589281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.589376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.589397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.594782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.594882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.594903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.729 [2024-11-20 09:13:58.599989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.729 [2024-11-20 09:13:58.600076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.729 [2024-11-20 09:13:58.600097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.604875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.604982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.605002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.609717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.609845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.609866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.614622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.614695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.614716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.619612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 
00:20:19.730 [2024-11-20 09:13:58.619695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.619715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.624602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.624694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.624713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.629508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.629581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.629602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.634352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.634456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.634475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.730 [2024-11-20 09:13:58.639205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.730 [2024-11-20 09:13:58.639292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.730 [2024-11-20 09:13:58.639313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.644607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.644683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.644704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.649634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.649770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.649793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.654653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.654744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.654764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.659440] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.659525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.659546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.664345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.664440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.664460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.669242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.669331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.669350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.674055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.674144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:20:19.990 [2024-11-20 09:13:58.678911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.679011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.679032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.684117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.684207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.684227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.689253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.689326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.689345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.694369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.694497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.694518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.699718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.699832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.699854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.705018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.705105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.705125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.710151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.710229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.710278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.715316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.715422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.715443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.720405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.720510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.720531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.725423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.725530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.725550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.730200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.730299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.730319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.735026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.735117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:19.990 [2024-11-20 09:13:58.735138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:19.990 [2024-11-20 09:13:58.739643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.990 [2024-11-20 09:13:58.739735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.990 [2024-11-20 09:13:58.739755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:19.991 [2024-11-20 09:13:58.744480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.991 [2024-11-20 09:13:58.744568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.991 [2024-11-20 09:13:58.744588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:19.991 [2024-11-20 09:13:58.749293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.991 [2024-11-20 09:13:58.749390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:19.991 [2024-11-20 09:13:58.749411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:19.991 [2024-11-20 09:13:58.754278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:19.991 [2024-11-20 09:13:58.754389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:19.991 [2024-11-20 09:13:58.754424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:19.991 [2024-11-20 09:13:58.759373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8
00:20:19.991 [2024-11-20 09:13:58.759481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:19.991 [2024-11-20 09:13:58.759502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-message cycle (tcp.c:2233:data_crc32_calc_done data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8, a WRITE sqid:1 len:32 command print, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 5 ms from 09:13:58.764 through 09:13:59.159, first on cid:0 and then on cid:1, with only the lba and sqhd values varying between iterations ...]
00:20:20.252 6169.00 IOPS, 771.12 MiB/s [2024-11-20T09:13:59.171Z]
00:20:20.254 [2024-11-20 09:13:59.159089] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.254 [2024-11-20 09:13:59.159196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.254 [2024-11-20 09:13:59.159217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.254 [2024-11-20 09:13:59.164006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.254 [2024-11-20 09:13:59.164091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.254 [2024-11-20 09:13:59.164112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.169387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.169480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.169501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.174553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.174676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.174697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 
09:13:59.179412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.179514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.179533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.184356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.184457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.184478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.189167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.189276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.189296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.194090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.194169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.516 [2024-11-20 09:13:59.194190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:20:20.516 [2024-11-20 09:13:59.198940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.516 [2024-11-20 09:13:59.199055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.517 [2024-11-20 09:13:59.199075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.517 [2024-11-20 09:13:59.203696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.517 [2024-11-20 09:13:59.203773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.517 [2024-11-20 09:13:59.203822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.517 [2024-11-20 09:13:59.208656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.517 [2024-11-20 09:13:59.208765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.517 [2024-11-20 09:13:59.208818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.517 [2024-11-20 09:13:59.213899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.517 [2024-11-20 09:13:59.213992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.517 [2024-11-20 09:13:59.214014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.517 [2024-11-20 09:13:59.219293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.517 [2024-11-20 09:13:59.219404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.517 [2024-11-20 09:13:59.219426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.517 [2024-11-20 09:13:59.224646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.517 [2024-11-20 09:13:59.224733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.518 [2024-11-20 09:13:59.224755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.518 [2024-11-20 09:13:59.230164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.518 [2024-11-20 09:13:59.230243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.518 [2024-11-20 09:13:59.230266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.518 [2024-11-20 09:13:59.235437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.518 [2024-11-20 09:13:59.235526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.518 [2024-11-20 09:13:59.235547] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.518 [2024-11-20 09:13:59.240829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.518 [2024-11-20 09:13:59.240920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.518 [2024-11-20 09:13:59.240942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.518 [2024-11-20 09:13:59.246082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.518 [2024-11-20 09:13:59.246164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.518 [2024-11-20 09:13:59.246186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.518 [2024-11-20 09:13:59.251307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.518 [2024-11-20 09:13:59.251413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.251434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.256542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.256633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:20.519 [2024-11-20 09:13:59.256653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.261509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.261635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.261656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.266519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.266609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.266630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.271432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.271523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.271543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.276468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.276559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.276580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.281379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.281462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.281483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.286357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.286462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.286483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.291211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.291299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.291319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.519 [2024-11-20 09:13:59.296110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.519 [2024-11-20 09:13:59.296200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.519 [2024-11-20 09:13:59.296221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.301035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.520 [2024-11-20 09:13:59.301131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.520 [2024-11-20 09:13:59.301153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.306005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.520 [2024-11-20 09:13:59.306099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.520 [2024-11-20 09:13:59.306120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.310965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.520 [2024-11-20 09:13:59.311057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.520 [2024-11-20 09:13:59.311077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.315857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.520 [2024-11-20 09:13:59.315966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.520 [2024-11-20 09:13:59.315987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.321004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.520 [2024-11-20 09:13:59.321095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.520 [2024-11-20 09:13:59.321115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.520 [2024-11-20 09:13:59.326009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.326096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.326117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.330874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.330969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.335924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with 
pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.336062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.336083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.340902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.340996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.341017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.345832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.345923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.345983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.350931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.521 [2024-11-20 09:13:59.351025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.521 [2024-11-20 09:13:59.351045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.521 [2024-11-20 09:13:59.355954] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.356043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 09:13:59.360883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.360980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.361000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 09:13:59.365817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.365910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.365958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 09:13:59.371121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.371211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 
09:13:59.376134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.376233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.376254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 09:13:59.381103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.522 [2024-11-20 09:13:59.381214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.522 [2024-11-20 09:13:59.381235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.522 [2024-11-20 09:13:59.386113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.386206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.386242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-20 09:13:59.391059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.391168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.391189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:20:20.523 [2024-11-20 09:13:59.396066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.396173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.396195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:20.523 [2024-11-20 09:13:59.401170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.401260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.401281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:20.523 [2024-11-20 09:13:59.406130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.406210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.406246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.523 [2024-11-20 09:13:59.411007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:20.523 [2024-11-20 09:13:59.411098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.523 [2024-11-20 09:13:59.411118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:20.523 [2024-11-20 09:13:59.415913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8
00:20:20.523 [2024-11-20 09:13:59.415992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:20.524 [2024-11-20 09:13:59.416015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-record pattern repeats roughly every 5 ms through 09:13:59.815: a tcp.c:2233:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8", the WRITE command print (sqid:1 cid:1 nsid:1, len:32, lba varying per command), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0022/0042/0062/0002 ...]
00:20:21.055 [2024-11-20 09:13:59.819757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8
00:20:21.055 [2024-11-20 09:13:59.819864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.819884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.824534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.824623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.824643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.829359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.829449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.829469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.834206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.834320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.834341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.839125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.839229] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.839249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.843933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.844033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.844053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.848807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.848892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.848912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.055 [2024-11-20 09:13:59.853572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.055 [2024-11-20 09:13:59.853663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.055 [2024-11-20 09:13:59.853683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.858457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.858551] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.858571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.863356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.863445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.863464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.868209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.868304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.868324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.873018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.873104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.873124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.877802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with 
pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.877894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.877914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.882727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.882868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.882889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.887516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.887622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.887643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.892390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.892484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.892504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.897154] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.897281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.897302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.901962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.902056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.902078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.906672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.906782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.906803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.911517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.911597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.911616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 
09:13:59.916445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.916538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.916558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.921311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.921400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.921419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.926251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.926370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.926406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.931535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.931627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.931649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.936632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.936734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.936755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.941956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.942054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.942077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.947327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.947435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.947466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:21.056 [2024-11-20 09:13:59.952726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a35180) with pdu=0x2000166ff3c8 00:20:21.056 [2024-11-20 09:13:59.952887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:21.056 [2024-11-20 09:13:59.952923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:21.056 6203.00 IOPS, 775.38 MiB/s 00:20:21.056 Latency(us) 00:20:21.056 [2024-11-20T09:13:59.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.056 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:21.056 nvme0n1 : 2.00 6200.10 775.01 0.00 0.00 2575.02 2040.55 9472.93 00:20:21.056 [2024-11-20T09:13:59.975Z] =================================================================================================================== 00:20:21.056 [2024-11-20T09:13:59.975Z] Total : 6200.10 775.01 0.00 0.00 2575.02 2040.55 9472.93 00:20:21.056 { 00:20:21.056 "results": [ 00:20:21.056 { 00:20:21.056 "job": "nvme0n1", 00:20:21.056 "core_mask": "0x2", 00:20:21.056 "workload": "randwrite", 00:20:21.056 "status": "finished", 00:20:21.056 "queue_depth": 16, 00:20:21.056 "io_size": 131072, 00:20:21.056 "runtime": 2.003515, 00:20:21.056 "iops": 6200.1033184178805, 00:20:21.056 "mibps": 775.0129148022351, 00:20:21.056 "io_failed": 0, 00:20:21.056 "io_timeout": 0, 00:20:21.056 "avg_latency_us": 2575.01654835263, 00:20:21.056 "min_latency_us": 2040.5527272727272, 00:20:21.056 "max_latency_us": 9472.930909090908 00:20:21.056 } 00:20:21.056 ], 00:20:21.056 "core_count": 1 00:20:21.056 } 00:20:21.315 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:20:21.315 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:20:21.315 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:20:21.315 | .driver_specific 00:20:21.315 | .nvme_error 00:20:21.315 | .status_code 00:20:21.315 | .command_transient_transport_error' 00:20:21.315 09:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93758 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93758 ']' 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93758 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93758 00:20:21.574 killing process with pid 93758 00:20:21.574 Received shutdown signal, test time was about 2.000000 seconds 00:20:21.574 00:20:21.574 Latency(us) 00:20:21.574 [2024-11-20T09:14:00.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.574 [2024-11-20T09:14:00.493Z] =================================================================================================================== 00:20:21.574 [2024-11-20T09:14:00.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93758' 00:20:21.574 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93758 00:20:21.574 09:14:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93758 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93489 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93489 ']' 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93489 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93489 00:20:21.833 killing process with pid 93489 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93489' 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93489 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93489 00:20:21.833 ************************************ 00:20:21.833 END TEST nvmf_digest_error 00:20:21.833 ************************************ 00:20:21.833 00:20:21.833 real 0m16.076s 00:20:21.833 user 0m31.152s 00:20:21.833 sys 0m4.410s 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.833 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:20:22.092 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:22.092 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:22.093 rmmod nvme_tcp 00:20:22.093 rmmod nvme_fabrics 00:20:22.093 rmmod nvme_keyring 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 93489 ']' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 93489 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 93489 ']' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 93489 00:20:22.093 Process with pid 93489 is not found 00:20:22.093 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93489) - No such process 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 93489 is not found' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:22.093 
09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:22.093 09:14:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:22.093 09:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # continue 00:20:22.352 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@274 -- # iptr 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:20:22.353 00:20:22.353 real 0m33.616s 00:20:22.353 user 1m3.668s 00:20:22.353 sys 0m9.205s 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:22.353 ************************************ 00:20:22.353 END TEST nvmf_digest 00:20:22.353 ************************************ 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.353 ************************************ 00:20:22.353 START TEST nvmf_host_discovery 00:20:22.353 ************************************ 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:20:22.353 * Looking for test storage... 
00:20:22.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 
00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.353 --rc genhtml_branch_coverage=1 00:20:22.353 --rc genhtml_function_coverage=1 00:20:22.353 --rc genhtml_legend=1 00:20:22.353 --rc 
geninfo_all_blocks=1 00:20:22.353 --rc geninfo_unexecuted_blocks=1 00:20:22.353 00:20:22.353 ' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.353 --rc genhtml_branch_coverage=1 00:20:22.353 --rc genhtml_function_coverage=1 00:20:22.353 --rc genhtml_legend=1 00:20:22.353 --rc geninfo_all_blocks=1 00:20:22.353 --rc geninfo_unexecuted_blocks=1 00:20:22.353 00:20:22.353 ' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.353 --rc genhtml_branch_coverage=1 00:20:22.353 --rc genhtml_function_coverage=1 00:20:22.353 --rc genhtml_legend=1 00:20:22.353 --rc geninfo_all_blocks=1 00:20:22.353 --rc geninfo_unexecuted_blocks=1 00:20:22.353 00:20:22.353 ' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.353 --rc genhtml_branch_coverage=1 00:20:22.353 --rc genhtml_function_coverage=1 00:20:22.353 --rc genhtml_legend=1 00:20:22.353 --rc geninfo_all_blocks=1 00:20:22.353 --rc geninfo_unexecuted_blocks=1 00:20:22.353 00:20:22.353 ' 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:22.353 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.354 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.614 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 
1 ']' 00:20:22.614 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:22.614 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:22.615 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local 
-gA dev_map 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 
00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@207 -- # ip link set target0 up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:22.615 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:22.615 10.0.0.1 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:22.615 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- 
# echo 10.0.0.2 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:22.616 10.0.0.2 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:22.616 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@151 -- # set_up initiator1 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:22.616 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 
00:20:22.616 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:22.877 10.0.0.3 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:22.877 10.0.0.4 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 
00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment 
--comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:22.877 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
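The `ipts` call above expands to an `iptables` command with an `-m comment --comment 'SPDK_NVMF:...'` suffix built from the original arguments, which lets teardown later find and delete exactly the rules this test added. A hedged sketch of that wrapper as inferred from the trace (`echo` stands in for the real `iptables` call so the sketch runs without root):

```shell
# Sketch of common.sh's ipts helper, assumed from the trace: tag every
# rule with an SPDK_NVMF comment reproducing its own arguments.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
```

The comment-based tagging is the design choice worth noting: cleanup can `iptables-save | grep SPDK_NVMF` instead of tracking rule numbers, which shift as rules are inserted.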
nvmf/setup.sh@159 -- # dev=initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:22.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:20:22.878 00:20:22.878 --- 10.0.0.1 ping statistics --- 00:20:22.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.878 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.2 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:22.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:20:22.878 00:20:22.878 --- 10.0.0.2 ping statistics --- 00:20:22.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.878 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:22.878 09:14:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:22.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:22.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:20:22.878 00:20:22.878 --- 10.0.0.3 ping statistics --- 00:20:22.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.878 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.4 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:22.878 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:22.878 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:20:22.878 00:20:22.878 --- 10.0.0.4 ping statistics --- 00:20:22.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.878 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # return 0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:22.878 
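By this point the pings confirm the full address layout: initiator0/target0 got 10.0.0.1/10.0.0.2 and initiator1/target1 got 10.0.0.3/10.0.0.4, with `(( _dev++, ip_pool += 2 ))` advancing the pool two addresses per pair. A sketch of that allocation (assumed from the trace, not the actual setup.sh source):

```shell
# Each initiator/target pair consumes two consecutive addresses from an
# integer pool starting at 167772161 (10.0.0.1).
alloc_ips() {
  local ip_pool=167772161 pair t
  for pair in 0 1; do
    t=$((ip_pool + 1))
    printf 'initiator%d=%u.%u.%u.%u ' "$pair" \
      $((ip_pool >> 24)) $(((ip_pool >> 16) & 0xff)) \
      $(((ip_pool >> 8) & 0xff)) $((ip_pool & 0xff))
    printf 'target%d=%u.%u.%u.%u\n' "$pair" \
      $((t >> 24)) $(((t >> 16) & 0xff)) \
      $(((t >> 8) & 0xff)) $((t & 0xff))
    (( ip_pool += 2 ))
  done
}
alloc_ips
```

This matches the `set_ip initiator1 167772163` / `set_ip target1 167772164` calls earlier in the log.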
09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:22.878 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 
00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:22.879 09:14:01 
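The repeated `cat /sys/class/net/<dev>/ifalias` reads above are the second half of a round-trip: `set_ip` writes each device's address into its sysfs `ifalias`, and `get_ip_address` reads it back when exporting NVMF_FIRST_TARGET_IP and friends. A sketch of that round-trip; a temp directory stands in for `/sys/class/net` so the example runs unprivileged (the real script writes the actual sysfs files, optionally through `ip netns exec`):

```shell
# Sketch of the ifalias bookkeeping seen in the trace. $sysfs replaces
# /sys/class/net so no root or real interfaces are needed.
sysfs=$(mktemp -d)

set_ip_alias() {    # set_ip's bookkeeping step (setup.sh@200 in the trace)
  mkdir -p "$sysfs/$1"
  echo "$2" | tee "$sysfs/$1/ifalias" >/dev/null
}

get_ip_address() {  # get_ip_address (setup.sh@163): read the alias back
  cat "$sysfs/$1/ifalias"
}

set_ip_alias target1 10.0.0.4
get_ip_address target1   # 10.0.0.4
```

Storing the address in `ifalias` rather than a shell variable means any later process, inside or outside the namespace, can recover it from the interface itself.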
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=94089 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 94089 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 94089 ']' 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:22.879 09:14:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.138 [2024-11-20 09:14:01.842854] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:20:23.138 [2024-11-20 09:14:01.842943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:23.138 [2024-11-20 09:14:01.998224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.397 [2024-11-20 09:14:02.061467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:23.397 [2024-11-20 09:14:02.061535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:23.397 [2024-11-20 09:14:02.061549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:23.397 [2024-11-20 09:14:02.061560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:23.397 [2024-11-20 09:14:02.061569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:23.397 [2024-11-20 09:14:02.062069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:23.397 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:23.397 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:20:23.397 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 [2024-11-20 09:14:02.249309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 [2024-11-20 09:14:02.261516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 null0
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 null1
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=94126
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 94126 /tmp/host.sock
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 94126 ']'
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:20:23.398 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:23.398 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.658 [2024-11-20 09:14:02.361007] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:20:23.658 [2024-11-20 09:14:02.361109] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94126 ]
00:20:23.658 [2024-11-20 09:14:02.514983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:23.658 [2024-11-20 09:14:02.570445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # notify_id=0
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.956 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]]
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]]
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:23.957 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]]
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.215 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:24.216 09:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]]
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_bdev_list
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 [2024-11-20 09:14:03.085607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:24.216 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:20:24.475 09:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:20:25.043 [2024-11-20 09:14:03.722441] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:20:25.043 [2024-11-20 09:14:03.722506] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:20:25.043 [2024-11-20 09:14:03.722526] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:20:25.043 [2024-11-20 09:14:03.808620] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:20:25.043 [2024-11-20 09:14:03.863028] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:20:25.043 [2024-11-20 09:14:03.863866] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x990ba0:1 started.
00:20:25.043 [2024-11-20 09:14:03.865756] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:20:25.043 [2024-11-20 09:14:03.865810] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:20:25.043 [2024-11-20 09:14:03.870991] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x990ba0 was disconnected and freed. delete nvme_qpair.
00:20:25.611 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.611 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.612 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:25.872 [2024-11-20 09:14:04.564620] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x90b570:1 started.
00:20:25.872 [2024-11-20 09:14:04.571150] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x90b570 was disconnected and freed. delete nvme_qpair.
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 [2024-11-20 09:14:04.674471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:25.872 [2024-11-20 09:14:04.675347] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:20:25.872 [2024-11-20 09:14:04.675398] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:25.872 [2024-11-20 09:14:04.761840] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:20:25.872 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:26.131 [2024-11-20 09:14:04.825270] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:20:26.131 [2024-11-20 09:14:04.825341] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:26.131 [2024-11-20 09:14:04.825354] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:26.131 [2024-11-20 09:14:04.825360] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:20:26.131 09:14:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.068 [2024-11-20 09:14:05.967724] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:20:27.068 [2024-11-20 09:14:05.967760] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:20:27.068 [2024-11-20 09:14:05.973402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.068 [2024-11-20 09:14:05.973449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.068 [2024-11-20 09:14:05.973478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.068 [2024-11-20 09:14:05.973488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.068 [2024-11-20 09:14:05.973497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.068 [2024-11-20 09:14:05.973506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.068 [2024-11-20 09:14:05.973517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.068 [2024-11-20 09:14:05.973525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.068 [2024-11-20 09:14:05.973535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:27.068 09:14:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.068 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:20:27.069 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.069 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:20:27.069 [2024-11-20 09:14:05.983397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.329 [2024-11-20 09:14:05.993413] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.329 [2024-11-20 09:14:05.993436] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.329 [2024-11-20 09:14:05.993446] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:05.993469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.329 [2024-11-20 09:14:05.993504] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:27.329 [2024-11-20 09:14:05.993588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.329 [2024-11-20 09:14:05.993611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.329 [2024-11-20 09:14:05.993622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.329 [2024-11-20 09:14:05.993640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.329 [2024-11-20 09:14:05.993655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.329 [2024-11-20 09:14:05.993665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.329 [2024-11-20 09:14:05.993676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.329 [2024-11-20 09:14:05.993685] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.329 [2024-11-20 09:14:05.993691] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.329 [2024-11-20 09:14:05.993700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:27.329 09:14:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.329 [2024-11-20 09:14:06.003514] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.329 [2024-11-20 09:14:06.003536] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:20:27.329 [2024-11-20 09:14:06.003542] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:06.003547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.329 [2024-11-20 09:14:06.003591] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:06.003649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.329 [2024-11-20 09:14:06.003670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.329 [2024-11-20 09:14:06.003681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.329 [2024-11-20 09:14:06.003707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.329 [2024-11-20 09:14:06.003722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.329 [2024-11-20 09:14:06.003730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.329 [2024-11-20 09:14:06.003740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.329 [2024-11-20 09:14:06.003748] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.329 [2024-11-20 09:14:06.003754] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.329 [2024-11-20 09:14:06.003759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:27.329 [2024-11-20 09:14:06.013601] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.329 [2024-11-20 09:14:06.013812] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.329 [2024-11-20 09:14:06.013836] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:06.013842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.329 [2024-11-20 09:14:06.013874] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:06.013999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.329 [2024-11-20 09:14:06.014023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.329 [2024-11-20 09:14:06.014034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.329 [2024-11-20 09:14:06.014051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.329 [2024-11-20 09:14:06.014065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.329 [2024-11-20 09:14:06.014075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.329 [2024-11-20 09:14:06.014084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.329 [2024-11-20 09:14:06.014093] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:20:27.329 [2024-11-20 09:14:06.014099] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.329 [2024-11-20 09:14:06.014104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:27.329 [2024-11-20 09:14:06.023885] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.329 [2024-11-20 09:14:06.023906] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.329 [2024-11-20 09:14:06.023912] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.329 [2024-11-20 09:14:06.023916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.329 [2024-11-20 09:14:06.023959] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:27.329 [2024-11-20 09:14:06.024010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.329 [2024-11-20 09:14:06.024028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.329 [2024-11-20 09:14:06.024038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.329 [2024-11-20 09:14:06.024052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.329 [2024-11-20 09:14:06.024065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.329 [2024-11-20 09:14:06.024074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.329 [2024-11-20 09:14:06.024082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.329 [2024-11-20 09:14:06.024090] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.329 [2024-11-20 09:14:06.024095] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.329 [2024-11-20 09:14:06.024099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:27.329 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.329 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.329 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:20:27.330 [2024-11-20 09:14:06.033968] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:20:27.330 [2024-11-20 09:14:06.033993] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.330 [2024-11-20 09:14:06.033999] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.330 [2024-11-20 09:14:06.034004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.330 [2024-11-20 09:14:06.034028] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:27.330 [2024-11-20 09:14:06.034078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.330 [2024-11-20 09:14:06.034098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.330 [2024-11-20 09:14:06.034108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.330 [2024-11-20 09:14:06.034123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.330 [2024-11-20 09:14:06.034137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.330 [2024-11-20 09:14:06.034146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.330 [2024-11-20 09:14:06.034155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.330 [2024-11-20 09:14:06.034163] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.330 [2024-11-20 09:14:06.034168] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:20:27.330 [2024-11-20 09:14:06.034173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:27.330 [2024-11-20 09:14:06.044247] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.330 [2024-11-20 09:14:06.044288] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.330 [2024-11-20 09:14:06.044294] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.330 [2024-11-20 09:14:06.044299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.330 [2024-11-20 09:14:06.044339] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:27.330 [2024-11-20 09:14:06.044408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.330 [2024-11-20 09:14:06.044428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.330 [2024-11-20 09:14:06.044439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.330 [2024-11-20 09:14:06.044481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.330 [2024-11-20 09:14:06.044496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.330 [2024-11-20 09:14:06.044505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.330 [2024-11-20 09:14:06.044514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:20:27.330 [2024-11-20 09:14:06.044523] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.330 [2024-11-20 09:14:06.044528] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.330 [2024-11-20 09:14:06.044533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:27.330 [2024-11-20 09:14:06.054348] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:27.330 [2024-11-20 09:14:06.054369] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:27.330 [2024-11-20 09:14:06.054375] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:27.330 [2024-11-20 09:14:06.054380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:27.330 [2024-11-20 09:14:06.054422] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:27.330 [2024-11-20 09:14:06.054490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:27.330 [2024-11-20 09:14:06.054509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x963280 with addr=10.0.0.2, port=4420 00:20:27.330 [2024-11-20 09:14:06.054519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x963280 is same with the state(6) to be set 00:20:27.330 [2024-11-20 09:14:06.054534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x963280 (9): Bad file descriptor 00:20:27.330 [2024-11-20 09:14:06.054548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:27.330 [2024-11-20 09:14:06.054557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:27.330 [2024-11-20 09:14:06.054566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:27.330 [2024-11-20 09:14:06.054574] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:27.330 [2024-11-20 09:14:06.054579] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:27.330 [2024-11-20 09:14:06.054584] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:27.330 [2024-11-20 09:14:06.055269] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:20:27.330 [2024-11-20 09:14:06.055296] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.330 
09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.330 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:20:27.331 09:14:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:20:27.331 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.590 
09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:20:27.590 09:14:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.590 09:14:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.528 [2024-11-20 09:14:07.403252] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:28.528 [2024-11-20 09:14:07.403279] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:28.528 [2024-11-20 09:14:07.403298] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:28.786 [2024-11-20 09:14:07.489386] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:20:28.786 [2024-11-20 09:14:07.547838] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:20:28.786 [2024-11-20 09:14:07.548472] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x99c900:1 started. 00:20:28.786 [2024-11-20 09:14:07.551014] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:28.786 [2024-11-20 09:14:07.551072] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:20:28.786 [2024-11-20 09:14:07.552395] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x99c900 was disconnected and freed. delete nvme_qpair. 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:28.786 09:14:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.786 2024/11/20 09:14:07 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:20:28.786 request: 00:20:28.786 { 00:20:28.786 "method": "bdev_nvme_start_discovery", 00:20:28.786 "params": { 00:20:28.786 "name": "nvme", 00:20:28.786 "trtype": "tcp", 00:20:28.786 "traddr": "10.0.0.2", 00:20:28.786 "adrfam": "ipv4", 00:20:28.786 "trsvcid": "8009", 00:20:28.786 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:28.786 "wait_for_attach": true 00:20:28.786 } 00:20:28.786 } 00:20:28.786 Got JSON-RPC error response 00:20:28.786 GoRPCClient: error on JSON-RPC call 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # 
get_discovery_ctrlrs 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.786 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:28.786 2024/11/20 09:14:07 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:20:28.786 request: 00:20:28.786 { 00:20:28.786 "method": "bdev_nvme_start_discovery", 00:20:28.786 "params": { 00:20:28.786 "name": "nvme_second", 00:20:28.786 "trtype": "tcp", 00:20:28.786 "traddr": "10.0.0.2", 
00:20:28.787 "adrfam": "ipv4", 00:20:28.787 "trsvcid": "8009", 00:20:28.787 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:28.787 "wait_for_attach": true 00:20:28.787 } 00:20:28.787 } 00:20:28.787 Got JSON-RPC error response 00:20:28.787 GoRPCClient: error on JSON-RPC call 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:20:28.787 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.045 09:14:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:29.980 [2024-11-20 09:14:08.821232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:29.980 [2024-11-20 09:14:08.821318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c720 with addr=10.0.0.2, port=8010 00:20:29.980 [2024-11-20 09:14:08.821343] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:29.980 [2024-11-20 09:14:08.821354] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:29.980 [2024-11-20 09:14:08.821362] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:30.913 [2024-11-20 09:14:09.821217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:30.913 [2024-11-20 09:14:09.821300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99c720 with addr=10.0.0.2, port=8010 00:20:30.913 [2024-11-20 09:14:09.821325] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:30.913 [2024-11-20 09:14:09.821336] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:30.913 [2024-11-20 09:14:09.821345] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:20:32.289 [2024-11-20 09:14:10.821059] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:20:32.290 2024/11/20 09:14:10 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 
hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:20:32.290 request: 00:20:32.290 { 00:20:32.290 "method": "bdev_nvme_start_discovery", 00:20:32.290 "params": { 00:20:32.290 "name": "nvme_second", 00:20:32.290 "trtype": "tcp", 00:20:32.290 "traddr": "10.0.0.2", 00:20:32.290 "adrfam": "ipv4", 00:20:32.290 "trsvcid": "8010", 00:20:32.290 "hostnqn": "nqn.2021-12.io.spdk:test", 00:20:32.290 "wait_for_attach": false, 00:20:32.290 "attach_timeout_ms": 3000 00:20:32.290 } 00:20:32.290 } 00:20:32.290 Got JSON-RPC error response 00:20:32.290 GoRPCClient: error on JSON-RPC call 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.290 
09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 94126 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:32.290 rmmod nvme_tcp 00:20:32.290 rmmod nvme_fabrics 00:20:32.290 rmmod nvme_keyring 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 94089 ']' 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 94089 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 94089 ']' 00:20:32.290 09:14:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 94089 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.290 09:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94089 00:20:32.290 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.290 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.290 killing process with pid 94089 00:20:32.290 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94089' 00:20:32.290 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 94089 00:20:32.290 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 94089 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:32.549 
09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@266 -- # 
delete_dev initiator1 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # continue 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:20:32.549 00:20:32.549 real 0m10.308s 00:20:32.549 user 0m20.075s 00:20:32.549 sys 0m1.725s 00:20:32.549 09:14:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:32.549 ************************************ 00:20:32.549 END TEST nvmf_host_discovery 00:20:32.549 ************************************ 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.549 ************************************ 00:20:32.549 START TEST nvmf_discovery_remove_ifc 00:20:32.549 ************************************ 00:20:32.549 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:32.809 * Looking for test storage... 
00:20:32.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 
00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:32.809 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.809 --rc genhtml_branch_coverage=1 00:20:32.809 --rc genhtml_function_coverage=1 00:20:32.809 --rc genhtml_legend=1 00:20:32.809 --rc geninfo_all_blocks=1 00:20:32.809 --rc geninfo_unexecuted_blocks=1 00:20:32.809 00:20:32.809 ' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.809 --rc genhtml_branch_coverage=1 00:20:32.809 --rc genhtml_function_coverage=1 00:20:32.809 --rc genhtml_legend=1 00:20:32.809 --rc geninfo_all_blocks=1 00:20:32.809 --rc geninfo_unexecuted_blocks=1 00:20:32.809 00:20:32.809 ' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.809 --rc genhtml_branch_coverage=1 00:20:32.809 --rc genhtml_function_coverage=1 00:20:32.809 --rc genhtml_legend=1 00:20:32.809 --rc geninfo_all_blocks=1 00:20:32.809 --rc geninfo_unexecuted_blocks=1 00:20:32.809 00:20:32.809 ' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:32.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:32.809 --rc genhtml_branch_coverage=1 00:20:32.809 --rc genhtml_function_coverage=1 00:20:32.809 --rc genhtml_legend=1 00:20:32.809 --rc geninfo_all_blocks=1 00:20:32.809 --rc geninfo_unexecuted_blocks=1 00:20:32.809 00:20:32.809 ' 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.809 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:32.809 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 
-- # : 0 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:32.810 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # nvmftestinit 00:20:32.810 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@223 -- # create_target_ns 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:32.810 
09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:32.810 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:32.810 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:32.811 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target0 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:32.811 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:33.071 10.0.0.1 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target0 167772162 
NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:33.071 10.0.0.2 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.071 
09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br 
up' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:33.071 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:33.071 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@151 -- # set_up target1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:33.072 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772163 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 
00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:33.072 10.0.0.3 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772164 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/target1/ifalias' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:33.072 10.0.0.4 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:33.072 09:14:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:20:33.072 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:33.073 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:33.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:20:33.333 00:20:33.333 --- 10.0.0.1 ping statistics --- 00:20:33.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.333 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:33.333 09:14:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:33.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:20:33.333 00:20:33.333 --- 10.0.0.2 ping statistics --- 00:20:33.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.333 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@98 -- # local dev=initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:33.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:33.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:33.333 00:20:33.333 --- 10.0.0.3 ping statistics --- 00:20:33.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.333 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 
00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:33.333 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:33.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:33.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:20:33.334 00:20:33.334 --- 10.0.0.4 ping statistics --- 00:20:33.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.334 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # return 0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 
-- # get_initiator_ip_address initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # 
get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target0 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.334 
09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=target1 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:33.334 09:14:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.334 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=94643 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 94643 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@835 -- # '[' -z 94643 ']' 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.335 09:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.335 [2024-11-20 09:14:12.218313] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:33.335 [2024-11-20 09:14:12.218432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.593 [2024-11-20 09:14:12.361188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.593 [2024-11-20 09:14:12.410640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.593 [2024-11-20 09:14:12.410703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.593 [2024-11-20 09:14:12.410729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.593 [2024-11-20 09:14:12.410738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:33.593 [2024-11-20 09:14:12.410745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.594 [2024-11-20 09:14:12.411177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.538 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.538 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:34.538 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:34.538 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.539 [2024-11-20 09:14:13.313947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.539 [2024-11-20 09:14:13.322062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:20:34.539 null0 00:20:34.539 [2024-11-20 09:14:13.354009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=94693 00:20:34.539 
09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 94693 /tmp/host.sock 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 94693 ']' 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.539 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.539 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.539 [2024-11-20 09:14:13.432183] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:20:34.539 [2024-11-20 09:14:13.432724] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94693 ] 00:20:34.812 [2024-11-20 09:14:13.582912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.812 [2024-11-20 09:14:13.641029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.812 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.070 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.070 09:14:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:35.070 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.070 09:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.007 [2024-11-20 09:14:14.827537] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:36.007 [2024-11-20 09:14:14.827597] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:36.007 [2024-11-20 09:14:14.827637] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:36.007 [2024-11-20 09:14:14.913752] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:20:36.266 [2024-11-20 09:14:14.968173] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:20:36.266 [2024-11-20 09:14:14.969042] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc9b150:1 started. 
00:20:36.266 [2024-11-20 09:14:14.971020] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:36.266 [2024-11-20 09:14:14.971103] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:36.266 [2024-11-20 09:14:14.971132] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:36.266 [2024-11-20 09:14:14.971148] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:20:36.266 [2024-11-20 09:14:14.971174] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:36.266 [2024-11-20 09:14:14.975736] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc9b150 was disconnected and freed. delete nvme_qpair. 
00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:36.266 09:14:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev target0 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set target0 down 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.266 09:14:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:36.266 09:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:37.203 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:37.462 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.462 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:37.462 09:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # 
get_bdev_list 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:38.399 09:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:39.335 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:39.593 09:14:18 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.593 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:39.593 09:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.528 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.529 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:40.529 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.529 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:40.529 09:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.461 09:14:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:41.461 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.720 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:41.720 09:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:41.720 [2024-11-20 09:14:20.399621] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:41.720 [2024-11-20 09:14:20.399716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.720 [2024-11-20 09:14:20.399733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.720 [2024-11-20 09:14:20.399746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.720 [2024-11-20 09:14:20.399755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.720 [2024-11-20 09:14:20.399765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.720 [2024-11-20 09:14:20.399785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.720 [2024-11-20 09:14:20.399796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.720 [2024-11-20 09:14:20.399806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.720 [2024-11-20 09:14:20.399816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:41.720 [2024-11-20 09:14:20.399825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:41.720 [2024-11-20 09:14:20.399835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78520 is same with the state(6) to be set 00:20:41.720 [2024-11-20 09:14:20.409615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78520 (9): Bad file descriptor 00:20:41.720 [2024-11-20 09:14:20.419634] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:41.720 [2024-11-20 09:14:20.419674] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:41.720 [2024-11-20 09:14:20.419685] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:41.720 [2024-11-20 09:14:20.419692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:41.720 [2024-11-20 09:14:20.419730] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:42.654 [2024-11-20 09:14:21.442894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:20:42.654 [2024-11-20 09:14:21.443026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc78520 with addr=10.0.0.2, port=4420 00:20:42.654 [2024-11-20 09:14:21.443062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc78520 is same with the state(6) to be set 00:20:42.654 [2024-11-20 09:14:21.443129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc78520 (9): Bad file descriptor 00:20:42.654 [2024-11-20 09:14:21.444055] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:20:42.654 [2024-11-20 09:14:21.444143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:42.654 [2024-11-20 09:14:21.444170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:42.654 [2024-11-20 09:14:21.444193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:42.654 [2024-11-20 09:14:21.444214] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:42.654 [2024-11-20 09:14:21.444229] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:42.654 [2024-11-20 09:14:21.444241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:42.654 [2024-11-20 09:14:21.444263] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:42.654 [2024-11-20 09:14:21.444275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:20:42.654 09:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:43.590 [2024-11-20 09:14:22.444356] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:43.590 [2024-11-20 09:14:22.444407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:20:43.590 [2024-11-20 09:14:22.444436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:43.590 [2024-11-20 09:14:22.444447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:43.590 [2024-11-20 09:14:22.444469] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:43.590 [2024-11-20 09:14:22.444480] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:43.590 [2024-11-20 09:14:22.444487] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:43.590 [2024-11-20 09:14:22.444492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:20:43.590 [2024-11-20 09:14:22.444529] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:20:43.590 [2024-11-20 09:14:22.444580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.590 [2024-11-20 09:14:22.444596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.590 [2024-11-20 09:14:22.444610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.590 [2024-11-20 09:14:22.444619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.590 [2024-11-20 09:14:22.444629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:43.590 [2024-11-20 09:14:22.444639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.590 [2024-11-20 09:14:22.444649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.590 [2024-11-20 09:14:22.444658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.590 [2024-11-20 09:14:22.444668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:43.590 [2024-11-20 09:14:22.444678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:43.590 [2024-11-20 09:14:22.444687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:20:43.590 [2024-11-20 09:14:22.444731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc052d0 (9): Bad file descriptor 00:20:43.590 [2024-11-20 09:14:22.445733] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:43.590 [2024-11-20 09:14:22.445784] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.590 09:14:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:43.590 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.848 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:20:43.848 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:43.849 09:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.782 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:44.783 09:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:20:45.717 [2024-11-20 09:14:24.449099] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:20:45.717 [2024-11-20 09:14:24.449139] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:20:45.717 [2024-11-20 09:14:24.449160] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:20:45.717 [2024-11-20 09:14:24.535215] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:20:45.717 [2024-11-20 09:14:24.589664] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:20:45.717 [2024-11-20 09:14:24.590336] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc72100:1 started. 00:20:45.717 [2024-11-20 09:14:24.591722] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:45.717 [2024-11-20 09:14:24.591784] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:45.717 [2024-11-20 09:14:24.591809] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:45.717 [2024-11-20 09:14:24.591826] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:20:45.717 [2024-11-20 09:14:24.591836] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:20:45.717 [2024-11-20 09:14:24.597591] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc72100 was disconnected and freed. delete nvme_qpair. 
00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 94693 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94693 ']' 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94693 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94693 00:20:45.975 
killing process with pid 94693 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94693' 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94693 00:20:45.975 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94693 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:46.233 09:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:46.233 rmmod nvme_tcp 00:20:46.233 rmmod nvme_fabrics 00:20:46.233 rmmod nvme_keyring 00:20:46.233 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 94643 ']' 00:20:46.234 09:14:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 94643 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 94643 ']' 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 94643 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94643 00:20:46.234 killing process with pid 94643 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94643' 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 94643 00:20:46.234 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 94643 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:46.495 09:14:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:46.495 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:46.496 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link 
delete initiator0 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # continue 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 
00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:20:46.753 00:20:46.753 real 0m13.991s 00:20:46.753 user 0m24.384s 00:20:46.753 sys 0m1.751s 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:46.753 ************************************ 00:20:46.753 END TEST nvmf_discovery_remove_ifc 00:20:46.753 ************************************ 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.753 ************************************ 00:20:46.753 START TEST nvmf_multicontroller 00:20:46.753 ************************************ 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.753 * Looking for test storage... 
00:20:46.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.753 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 
-- # (( v = 0 )) 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.013 --rc genhtml_branch_coverage=1 00:20:47.013 --rc genhtml_function_coverage=1 00:20:47.013 --rc 
genhtml_legend=1 00:20:47.013 --rc geninfo_all_blocks=1 00:20:47.013 --rc geninfo_unexecuted_blocks=1 00:20:47.013 00:20:47.013 ' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.013 --rc genhtml_branch_coverage=1 00:20:47.013 --rc genhtml_function_coverage=1 00:20:47.013 --rc genhtml_legend=1 00:20:47.013 --rc geninfo_all_blocks=1 00:20:47.013 --rc geninfo_unexecuted_blocks=1 00:20:47.013 00:20:47.013 ' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.013 --rc genhtml_branch_coverage=1 00:20:47.013 --rc genhtml_function_coverage=1 00:20:47.013 --rc genhtml_legend=1 00:20:47.013 --rc geninfo_all_blocks=1 00:20:47.013 --rc geninfo_unexecuted_blocks=1 00:20:47.013 00:20:47.013 ' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.013 --rc genhtml_branch_coverage=1 00:20:47.013 --rc genhtml_function_coverage=1 00:20:47.013 --rc genhtml_legend=1 00:20:47.013 --rc geninfo_all_blocks=1 00:20:47.013 --rc geninfo_unexecuted_blocks=1 00:20:47.013 00:20:47.013 ' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.013 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.013 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 00:20:47.014 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:47.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@223 -- # create_target_ns 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:47.014 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:47.014 
09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:47.014 
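The `ipts` call expanded above tags every rule it installs with an `SPDK_NVMF:`-prefixed comment, so teardown can later locate and delete exactly the rules this test added. A minimal reconstruction of that wrapper (with `iptables` stubbed out so the sketch runs unprivileged; the real helper in nvmf/common.sh invokes the actual binary):

```shell
#!/usr/bin/env bash
# Stub iptables so this sketch runs without root; the real helper calls the binary.
iptables() { printf '%s\n' "iptables $*"; }

# Reconstruction of the ipts helper seen in the trace: append a traceable
# comment ("SPDK_NVMF:" + the original arguments) to every installed rule.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
```

Cleanup can then match on the comment prefix rather than re-deriving each rule spec.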
09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:47.014 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up target0 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:47.014 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 
-- # local dev=initiator0 ip=167772161 in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:47.015 10.0.0.1 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:20:47.015 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:47.015 10.0.0.2 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
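Before assigning an address, `set_ip` converts the 32-bit pool value into dotted-quad form via `val_to_ip` (167772161 is 0x0a000001, hence 10.0.0.1). The trace only shows the final `printf '%u.%u.%u.%u\n'`; the octet extraction below is an assumed implementation consistent with that output:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip from nvmf/setup.sh: split a 32-bit value into four
# octets. The shift/mask extraction is an assumption; the log only shows
# the final printf with the octets already separated.
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
		$(( (val >> 8) & 0xff )) $(( val & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```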
00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up target0_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local 
dev=target0_br in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 
-- # [[ tcp == tcp ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:47.015 09:14:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@151 -- # set_up target1 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:47.015 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772163 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:47.275 10.0.0.3 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.275 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772164 
00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772164 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:47.276 10.0.0.4 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:47.276 09:14:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:20:47.276 09:14:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.276 09:14:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:47.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:47.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:20:47.276 00:20:47.276 --- 10.0.0.1 ping statistics --- 00:20:47.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.276 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target0 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target0 00:20:47.276 09:14:26 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:47.276 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:47.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms
00:20:47.277
00:20:47.277 --- 10.0.0.2 ping statistics ---
00:20:47.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:47.277 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ ))
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:20:47.277 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:20:47.277 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms
00:20:47.277
00:20:47.277 --- 10.0.0.3 ping statistics ---
00:20:47.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:47.277 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:20:47.277 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:20:47.277 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms
00:20:47.277
00:20:47.277 --- 10.0.0.4 ping statistics ---
00:20:47.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:47.277 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ ))
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # return 0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator0
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:20:47.277 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo initiator1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=initiator1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target0
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target0
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo target1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=target1
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:20:47.278 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=95155
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 95155
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 95155 ']'
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:47.536 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:47.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:47.537 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:47.537 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:47.537 [2024-11-20 09:14:26.282036] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:20:47.537 [2024-11-20 09:14:26.282130] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:47.537 [2024-11-20 09:14:26.431594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:47.794 [2024-11-20 09:14:26.505909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:47.794 [2024-11-20 09:14:26.506018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:47.794 [2024-11-20 09:14:26.506041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:47.794 [2024-11-20 09:14:26.506056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:47.794 [2024-11-20 09:14:26.506068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:47.794 [2024-11-20 09:14:26.507494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:47.794 [2024-11-20 09:14:26.507646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:47.794 [2024-11-20 09:14:26.507659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:47.794 [2024-11-20 09:14:26.695224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.794 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 Malloc0
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 [2024-11-20 09:14:26.760729] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 [2024-11-20 09:14:26.768645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 Malloc1
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=95189
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 95189 /var/tmp/bdevperf.sock
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 95189 ']'
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:48.053 09:14:26 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.621 NVMe0n1
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.621 1
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.621 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.621 2024/11/20 09:14:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:20:48.621 request:
00:20:48.621 {
00:20:48.621 "method": "bdev_nvme_attach_controller",
00:20:48.621 "params": {
00:20:48.621 "name": "NVMe0",
00:20:48.621 "trtype": "tcp",
00:20:48.621 "traddr": "10.0.0.2",
00:20:48.621 "adrfam": "ipv4",
00:20:48.621 "trsvcid": "4420",
00:20:48.621 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:48.621 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:20:48.621 "hostaddr": "10.0.0.1",
00:20:48.621 "prchk_reftag": false,
00:20:48.621 "prchk_guard": false,
00:20:48.621 "hdgst": false,
00:20:48.621 "ddgst": false,
00:20:48.621 "allow_unrecognized_csi": false
00:20:48.621 }
00:20:48.621 }
00:20:48.621 Got JSON-RPC error response
00:20:48.622 GoRPCClient: error on JSON-RPC call
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:48.622 2024/11/20 09:14:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path
00:20:48.622 request:
00:20:48.622 {
00:20:48.622 "method": "bdev_nvme_attach_controller",
00:20:48.622 "params": {
00:20:48.622 "name": "NVMe0",
00:20:48.622 "trtype": "tcp",
00:20:48.622 "traddr": "10.0.0.2",
00:20:48.622 "adrfam": "ipv4",
00:20:48.622 "trsvcid": "4420",
00:20:48.622 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:20:48.622 "hostaddr": "10.0.0.1", 00:20:48.622 "prchk_reftag": false, 00:20:48.622 "prchk_guard": false, 00:20:48.622 "hdgst": false, 00:20:48.622 "ddgst": false, 00:20:48.622 "allow_unrecognized_csi": false 00:20:48.622 } 00:20:48.622 } 00:20:48.622 Got JSON-RPC error response 00:20:48.622 GoRPCClient: error on JSON-RPC call 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # 
case "$(type -t "$arg")" in 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.622 2024/11/20 09:14:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:48.622 request: 00:20:48.622 { 00:20:48.622 "method": "bdev_nvme_attach_controller", 00:20:48.622 "params": { 00:20:48.622 "name": "NVMe0", 00:20:48.622 "trtype": "tcp", 00:20:48.622 "traddr": "10.0.0.2", 00:20:48.622 "adrfam": "ipv4", 00:20:48.622 "trsvcid": "4420", 00:20:48.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.622 "hostaddr": "10.0.0.1", 00:20:48.622 "prchk_reftag": false, 00:20:48.622 "prchk_guard": false, 00:20:48.622 "hdgst": false, 00:20:48.622 "ddgst": false, 00:20:48.622 "multipath": "disable", 00:20:48.622 "allow_unrecognized_csi": false 00:20:48.622 } 00:20:48.622 } 00:20:48.622 Got JSON-RPC error response 00:20:48.622 GoRPCClient: error on JSON-RPC call 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:48.622 09:14:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.622 2024/11/20 09:14:27 error on JSON-RPC call, 
method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:48.622 request: 00:20:48.622 { 00:20:48.622 "method": "bdev_nvme_attach_controller", 00:20:48.622 "params": { 00:20:48.622 "name": "NVMe0", 00:20:48.622 "trtype": "tcp", 00:20:48.622 "traddr": "10.0.0.2", 00:20:48.622 "adrfam": "ipv4", 00:20:48.622 "trsvcid": "4420", 00:20:48.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.622 "hostaddr": "10.0.0.1", 00:20:48.622 "prchk_reftag": false, 00:20:48.622 "prchk_guard": false, 00:20:48.622 "hdgst": false, 00:20:48.622 "ddgst": false, 00:20:48.622 "multipath": "failover", 00:20:48.622 "allow_unrecognized_csi": false 00:20:48.622 } 00:20:48.622 } 00:20:48.622 Got JSON-RPC error response 00:20:48.622 GoRPCClient: error on JSON-RPC call 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:48.622 09:14:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.622 NVMe0n1 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.622 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.880 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:48.880 09:14:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:20:48.880 09:14:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.813 { 00:20:49.813 "results": [ 00:20:49.813 { 00:20:49.813 "job": "NVMe0n1", 00:20:49.813 "core_mask": "0x1", 00:20:49.813 "workload": "write", 00:20:49.813 "status": "finished", 00:20:49.813 "queue_depth": 128, 00:20:49.813 "io_size": 4096, 00:20:49.813 "runtime": 1.006597, 00:20:49.813 "iops": 19415.913220484465, 00:20:49.813 "mibps": 75.84341101751744, 00:20:49.813 "io_failed": 0, 00:20:49.813 "io_timeout": 0, 00:20:49.813 "avg_latency_us": 6575.690705168757, 00:20:49.813 "min_latency_us": 3202.327272727273, 00:20:49.813 "max_latency_us": 15013.701818181818 00:20:49.813 } 00:20:49.813 ], 00:20:49.813 "core_count": 1 00:20:49.813 } 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n 10.0.0.3 ]] 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@97 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.813 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.071 nvme1n1 00:20:50.071 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.071 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@101 -- # jq -r '.[].peer_address.traddr' 00:20:50.071 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@101 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:20:50.071 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@101 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.3 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.072 nvme1n1 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # jq -r '.[].peer_address.traddr' 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.072 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.330 09:14:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # [[ 10.0.0.3 == \1\0\.\0\.\0\.\3 ]] 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 95189 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 95189 ']' 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 95189 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95189 00:20:50.330 killing process with pid 95189 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 95189' 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 95189 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 95189 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.330 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:50.589 09:14:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:50.589 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:50.589 [2024-11-20 09:14:26.892060] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:20:50.589 [2024-11-20 09:14:26.892173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95189 ] 00:20:50.589 [2024-11-20 09:14:27.041998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.589 [2024-11-20 09:14:27.108861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.589 [2024-11-20 09:14:27.546864] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 887256b4-95ca-46aa-9157-4a391ccf827f already exists 00:20:50.589 [2024-11-20 09:14:27.546933] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:887256b4-95ca-46aa-9157-4a391ccf827f alias for bdev NVMe1n1 00:20:50.589 [2024-11-20 09:14:27.546952] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:50.589 Running I/O for 1 seconds... 
00:20:50.589 19352.00 IOPS, 75.59 MiB/s 00:20:50.589 Latency(us) 00:20:50.589 [2024-11-20T09:14:29.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.589 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:50.589 NVMe0n1 : 1.01 19415.91 75.84 0.00 0.00 6575.69 3202.33 15013.70 00:20:50.589 [2024-11-20T09:14:29.508Z] =================================================================================================================== 00:20:50.589 [2024-11-20T09:14:29.508Z] Total : 19415.91 75.84 0.00 0.00 6575.69 3202.33 15013.70 00:20:50.589 Received shutdown signal, test time was about 1.000000 seconds 00:20:50.589 00:20:50.589 Latency(us) 00:20:50.589 [2024-11-20T09:14:29.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.589 [2024-11-20T09:14:29.508Z] =================================================================================================================== 00:20:50.589 [2024-11-20T09:14:29.508Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.589 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- 
# for i in {1..20} 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:50.589 rmmod nvme_tcp 00:20:50.589 rmmod nvme_fabrics 00:20:50.589 rmmod nvme_keyring 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 95155 ']' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 95155 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 95155 ']' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 95155 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95155 00:20:50.589 killing process with pid 95155 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95155' 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 95155 00:20:50.589 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 95155 
00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:20:50.848 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:20:51.106 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@266 -- # delete_dev initiator0 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # continue 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 
00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # continue 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:20:51.107 00:20:51.107 real 0m4.347s 00:20:51.107 user 0m12.238s 00:20:51.107 sys 0m1.249s 00:20:51.107 ************************************ 00:20:51.107 END TEST nvmf_multicontroller 00:20:51.107 ************************************ 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@35 -- # [[ tcp == \r\d\m\a ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@39 -- # [[ 1 -eq 1 ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@39 -- # [[ tcp == \t\c\p ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.107 
************************************ 00:20:51.107 START TEST nvmf_mdns_discovery 00:20:51.107 ************************************ 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:51.107 * Looking for test storage... 00:20:51.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.107 09:14:29 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.367 --rc genhtml_branch_coverage=1 00:20:51.367 --rc genhtml_function_coverage=1 00:20:51.367 --rc genhtml_legend=1 00:20:51.367 --rc geninfo_all_blocks=1 00:20:51.367 --rc geninfo_unexecuted_blocks=1 00:20:51.367 00:20:51.367 ' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.367 --rc genhtml_branch_coverage=1 00:20:51.367 --rc genhtml_function_coverage=1 00:20:51.367 --rc genhtml_legend=1 00:20:51.367 --rc geninfo_all_blocks=1 00:20:51.367 --rc geninfo_unexecuted_blocks=1 00:20:51.367 00:20:51.367 ' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.367 --rc genhtml_branch_coverage=1 00:20:51.367 --rc genhtml_function_coverage=1 00:20:51.367 --rc genhtml_legend=1 00:20:51.367 --rc geninfo_all_blocks=1 00:20:51.367 --rc geninfo_unexecuted_blocks=1 00:20:51.367 00:20:51.367 ' 00:20:51.367 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.367 --rc genhtml_branch_coverage=1 00:20:51.367 --rc genhtml_function_coverage=1 00:20:51.368 --rc genhtml_legend=1 00:20:51.368 --rc geninfo_all_blocks=1 00:20:51.368 --rc geninfo_unexecuted_blocks=1 00:20:51.368 00:20:51.368 ' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:51.368 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@50 -- # : 0 00:20:51.368 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:51.368 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:51.368 09:14:30 
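The `[: : integer expression expected` message above comes from `'[' '' -eq 1 ']'`: an empty string is not a valid operand for the numeric `-eq` test, so bash's `[` builtin complains (the test then evaluates false and the script continues). A minimal sketch of the failure mode and a common guard, assuming only standard bash behavior (not the actual fix used in common.sh):

```shell
# Reproduce the error class: -eq needs integers on both sides.
v=""   # hypothetical empty flag variable, like the one in common.sh line 31

# '[ "$v" -eq 1 ]' would print "[: : integer expression expected" to stderr.
# Defaulting the expansion avoids it:
if [ "${v:-0}" -eq 1 ]; then
    echo yes
else
    echo no
fi
# prints "no"
```

The `${v:-0}` parameter expansion substitutes `0` when the variable is unset or empty, keeping the comparison well-formed.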
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@280 -- # nvmf_veth_init 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@223 -- # create_target_ns 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 
00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.368 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@224 -- # create_main_bridge 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@105 -- # delete_main_bridge 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # return 0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 
00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 
00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up initiator0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up target0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0 up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up target0_br 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # add_to_ns target0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:20:51.369 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:51.369 10.0.0.1 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:20:51.369 10.0.0.2 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@66 -- # set_up initiator0 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.369 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n 
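The `val_to_ip` calls traced above turn a 32-bit integer from the IP pool (`ip_pool=0x0a000001`) into dotted-quad form: 167772161 is `0x0A000001`, i.e. 10.0.0.1, and each pair consumes two consecutive addresses. A sketch of that conversion, assuming the usual shift-and-mask decomposition (the trace shows only the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the exact byte-extraction in setup.sh may differ):

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address,
# matching the values seen in the trace (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xFF )) \
        $(( (val >> 16) & 0xFF )) \
        $(( (val >> 8)  & 0xFF )) \
        $((  val        & 0xFF ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
```

This explains the `(( ip_pool += 2 ))` step in the loop: each initiator/target pair advances the pool by two addresses, so pair 1 gets 10.0.0.3 and 10.0.0.4.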
NVMF_TARGET_NS_CMD ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@129 -- # set_up target0_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # ips=() 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up initiator1 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:51.370 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:20:51.370 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@151 -- # set_up target1 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1 up 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@152 -- # set_up target1_br 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@61 -- # add_to_ns target1 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@144 -- # ip link set target1 netns 
nvmf_ns_spdk 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772163 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:20:51.630 10.0.0.3 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@11 -- # local val=167772164 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:20:51.630 10.0.0.4 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@66 -- # set_up initiator1 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:20:51.630 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 
00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@129 -- # set_up target1_br 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@38 -- # ping_ips 2 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # 
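[Editor's note] The `ipts` line above shows that `common.sh` does not call `iptables` directly: it appends `-m comment --comment 'SPDK_NVMF:<rule>'`, tagging each rule with its own text so teardown can find and delete exactly the rules the test added. A sketch of the argv that wrapper builds (function name as in the trace; this only constructs the command, it does not run it):

```python
def ipts(*rule):
    """Build the iptables argv the way common.sh's ipts wrapper does:
    the original rule, plus a comment match recording the rule itself."""
    rule = list(rule)
    return ["iptables", *rule,
            "-m", "comment", "--comment", "SPDK_NVMF:" + " ".join(rule)]
```

For the rule in the log this yields the exact comment seen there: `SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT`.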
get_initiator_ip_address initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:51.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:20:51.631 00:20:51.631 --- 10.0.0.1 ping statistics --- 00:20:51.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.631 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:51.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:51.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:51.631 00:20:51.631 --- 10.0.0.2 ping statistics --- 00:20:51.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.631 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # 
[[ -n 10.0.0.3 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:20:51.631 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:51.631 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:20:51.631 00:20:51.631 --- 10.0.0.3 ping statistics --- 00:20:51.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.631 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:51.631 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:51.631 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:20:51.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:51.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:20:51.632 00:20:51.632 --- 10.0.0.4 ping statistics --- 00:20:51.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.632 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@281 -- # return 0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 
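[Editor's note] The four `ping -c 1` runs above (10.0.0.1 through 10.0.0.4, alternating between the host and the `nvmf_ns_spdk` namespace) verify both directions of each veth pair before the target starts. If one wanted to check those results programmatically rather than by eye, the statistics line is easy to parse; a hypothetical helper:

```python
import re

def parse_ping_stats(output):
    """Extract (transmitted, received, loss%) from ping's statistics line,
    e.g. '1 packets transmitted, 1 received, 0% packet loss, time 0ms'."""
    m = re.search(
        r"(\d+) packets transmitted, (\d+) received, "
        r"(\d+(?:\.\d+)?)% packet loss", output)
    if not m:
        raise ValueError("no ping statistics found")
    return int(m.group(1)), int(m.group(2)), float(m.group(3))
```

Applied to the outputs in this log, every pair reports 1 transmitted, 1 received, 0% loss.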
-- # [[ -n initiator0 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=initiator0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # 
dev=initiator1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target0 00:20:51.632 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target0 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:51.632 09:14:30 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@101 -- # echo target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@159 -- # dev=target1 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:51.632 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@328 -- # nvmfpid=95483 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@329 -- # waitforlisten 95483 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95483 ']' 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.891 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.891 [2024-11-20 09:14:30.622044] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:20:51.891 [2024-11-20 09:14:30.622892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.891 [2024-11-20 09:14:30.766080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:52.149 [2024-11-20 09:14:30.844159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.149 [2024-11-20 09:14:30.844246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.149 [2024-11-20 09:14:30.844280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.150 [2024-11-20 09:14:30.844293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.150 [2024-11-20 09:14:30.844304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:52.150 [2024-11-20 09:14:30.844768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.150 09:14:30 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.150 [2024-11-20 09:14:31.061076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.150 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 [2024-11-20 09:14:31.069213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 null0
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 null1
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 null2
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 null3
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=95524
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 95524 /tmp/host.sock
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 95524 ']'
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:20:52.408 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:52.408 09:14:31 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:52.408 [2024-11-20 09:14:31.178715] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:20:52.408 [2024-11-20 09:14:31.179000] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95524 ]
00:20:52.667 [2024-11-20 09:14:31.333164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:52.667 [2024-11-20 09:14:31.400410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=95550
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_ns_spdk avahi-daemon -f /dev/fd/63
00:20:53.600 09:14:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=target0,target1\nuse-ipv4=yes\nuse-ipv6=no'
00:20:53.600 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid)
00:20:53.600 Found user 'avahi' (UID 70) and group 'avahi' (GID 70).
00:20:53.600 Successfully dropped root privileges.
00:20:53.600 avahi-daemon 0.8 starting up.
00:20:53.600 WARNING: No NSS support for mDNS detected, consider installing nss-mdns!
00:20:54.605 Successfully called chroot().
00:20:54.605 Successfully dropped remaining capabilities.
00:20:54.605 No service file found in /etc/avahi/services.
00:20:54.605 Joining mDNS multicast group on interface target1.IPv4 with address 10.0.0.4.
00:20:54.605 New relevant interface target1.IPv4 for mDNS.
00:20:54.605 Joining mDNS multicast group on interface target0.IPv4 with address 10.0.0.2.
00:20:54.605 New relevant interface target0.IPv4 for mDNS.
00:20:54.605 Network interface enumeration completed.
00:20:54.605 Registering new address record for fe80::d07d:ffff:fe85:cc2b on target1.*.
00:20:54.605 Registering new address record for 10.0.0.4 on target1.IPv4.
00:20:54.605 Registering new address record for fe80::9cc5:c2ff:fecc:c8fe on target0.*.
00:20:54.605 Registering new address record for 10.0.0.2 on target0.IPv4.
00:20:54.605 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 4173149241.
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:20:54.605 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]]
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.863 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:20:54.864 [2024-11-20 09:14:33.654038] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 [2024-11-20 09:14:33.709895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.864 09:14:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5
00:20:55.798 [2024-11-20 09:14:34.554044] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW
00:20:56.057 [2024-11-20 09:14:34.954079] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:20:56.057 [2024-11-20 09:14:34.954354] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:20:56.057 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:20:56.057 cookie is 0
00:20:56.057 is_local: 1
00:20:56.057 our_own: 0
00:20:56.057 wide_area: 0
00:20:56.057 multicast: 1
00:20:56.057 cached: 1
00:20:56.315 [2024-11-20 09:14:35.054064] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local'
00:20:56.315 [2024-11-20 09:14:35.054341] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2)
00:20:56.315 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:20:56.315 cookie is 0
00:20:56.315 is_local: 1
00:20:56.315 our_own: 0
00:20:56.315 wide_area: 0
00:20:56.315 multicast: 1
00:20:56.315 cached: 1
00:20:57.251 [2024-11-20 09:14:35.955818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:57.251 [2024-11-20 09:14:35.956039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233d710 with addr=10.0.0.4, port=8009
00:20:57.251 [2024-11-20 09:14:35.956215] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:20:57.251 [2024-11-20 09:14:35.956358] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:20:57.251 [2024-11-20 09:14:35.956506] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:20:57.251 [2024-11-20 09:14:36.066000] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:20:57.251 [2024-11-20 09:14:36.066216] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:20:57.251 [2024-11-20 09:14:36.066279] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:20:57.251 [2024-11-20 09:14:36.152157] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0
00:20:57.509 [2024-11-20 09:14:36.206966] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:20:57.509 [2024-11-20 09:14:36.208129] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2372e70:1 started.
00:20:57.509 [2024-11-20 09:14:36.210164] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done
00:20:57.509 [2024-11-20 09:14:36.210322] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:20:57.509 [2024-11-20 09:14:36.214481] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2372e70 was disconnected and freed. delete nvme_qpair.
00:20:58.076 [2024-11-20 09:14:36.955782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:58.076 [2024-11-20 09:14:36.956060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24abb30 with addr=10.0.0.4, port=8009
00:20:58.076 [2024-11-20 09:14:36.956232] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:20:58.076 [2024-11-20 09:14:36.956349] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:20:58.076 [2024-11-20 09:14:36.956395] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:20:59.450 [2024-11-20 09:14:37.955771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:59.451 [2024-11-20 09:14:37.956066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233d0d0 with addr=10.0.0.4, port=8009
00:20:59.451 [2024-11-20 09:14:37.956224] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:20:59.451 [2024-11-20 09:14:37.956336] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:20:59.451 [2024-11-20 09:14:37.956382] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found'
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found'
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:00.016 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:00.016 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:00.016 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:00.016 [2024-11-20 09:14:38.796361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 ***
00:21:00.016 [2024-11-20 09:14:38.798347] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:21:00.016 [2024-11-20 09:14:38.798518] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:00.016 [2024-11-20 09:14:38.804256] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 ***
00:21:00.016 [2024-11-20 09:14:38.805334] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.016 09:14:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1
00:21:00.274 [2024-11-20 09:14:38.936459] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:21:00.274 [2024-11-20 09:14:38.936785] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:21:00.274 [2024-11-20 09:14:38.966296] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached
00:21:00.274 [2024-11-20 09:14:38.966500] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected
00:21:00.274 [2024-11-20 09:14:38.966560] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:21:00.274 [2024-11-20 09:14:39.023831] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:21:00.274 [2024-11-20 09:14:39.052424] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0
00:21:00.274 [2024-11-20 09:14:39.107061] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420
00:21:00.274 [2024-11-20 09:14:39.108030] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x236fd90:1 started.
00:21:00.274 [2024-11-20 09:14:39.109563] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done
00:21:00.274 [2024-11-20 09:14:39.109590] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:21:00.274 [2024-11-20 09:14:39.114916] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x236fd90 was disconnected and freed. delete nvme_qpair.
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:01.206 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:01.206 +;(null);IPv4;spdk1;_nvme-disc._tcp;local
00:21:01.206 +;(null);IPv4;spdk0;_nvme-disc._tcp;local
00:21:01.206 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:01.206 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:01.206 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:21:01.206 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}"
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name'
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name'
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]]
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:01.206 09:14:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.206 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.207 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:01.207 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:01.207 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:01.207 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.207 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@168 -- # get_notification_count 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.465 [2024-11-20 09:14:40.237653] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 
0x2377c50:1 started. 00:21:01.465 [2024-11-20 09:14:40.241703] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x24a0d40:1 started. 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.465 09:14:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:21:01.465 [2024-11-20 09:14:40.245983] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2377c50 was disconnected and freed. delete nvme_qpair. 00:21:01.465 [2024-11-20 09:14:40.246408] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x24a0d40 was disconnected and freed. delete nvme_qpair. 00:21:01.465 [2024-11-20 09:14:40.254081] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.465 [2024-11-20 09:14:40.254106] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:21:01.465 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.465 cookie is 0 00:21:01.465 is_local: 1 00:21:01.465 our_own: 0 00:21:01.465 wide_area: 0 00:21:01.465 multicast: 1 00:21:01.465 cached: 1 00:21:01.465 [2024-11-20 09:14:40.254120] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:01.465 [2024-11-20 09:14:40.354085] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:01.465 [2024-11-20 09:14:40.354133] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:01.465 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:01.465 cookie is 0 00:21:01.465 is_local: 1 00:21:01.465 our_own: 0 00:21:01.465 wide_area: 0 00:21:01.465 multicast: 1 00:21:01.465 cached: 1 00:21:01.465 [2024-11-20 09:14:40.354148] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # 
get_notification_count 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.657 [2024-11-20 09:14:41.365720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:02.657 [2024-11-20 09:14:41.366384] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:02.657 [2024-11-20 09:14:41.366426] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:02.657 [2024-11-20 09:14:41.366467] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:02.657 [2024-11-20 09:14:41.366485] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent 
discovery log page command 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.657 [2024-11-20 09:14:41.373627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:21:02.657 [2024-11-20 09:14:41.374372] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:02.657 [2024-11-20 09:14:41.374427] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.657 09:14:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:21:02.657 [2024-11-20 09:14:41.505489] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:21:02.657 [2024-11-20 09:14:41.506225] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:21:02.657 [2024-11-20 09:14:41.570259] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:21:02.657 [2024-11-20 09:14:41.570336] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:02.657 [2024-11-20 09:14:41.570349] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:21:02.657 [2024-11-20 09:14:41.570355] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:02.657 [2024-11-20 09:14:41.570377] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:02.657 [2024-11-20 09:14:41.570518] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:21:02.657 [2024-11-20 09:14:41.570545] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:02.657 [2024-11-20 09:14:41.570554] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:02.657 [2024-11-20 09:14:41.570560] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:02.657 [2024-11-20 09:14:41.570576] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:02.915 [2024-11-20 09:14:41.615700] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:02.915 [2024-11-20 09:14:41.615734] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:02.915 [2024-11-20 09:14:41.616694] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:02.915 [2024-11-20 09:14:41.616711] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:21:03.503 09:14:42 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:03.503 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == 
\m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.762 09:14:42 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.762 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.024 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.024 [2024-11-20 09:14:42.691418] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:04.024 [2024-11-20 09:14:42.691469] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: 
Discovery[10.0.0.2:8009] sent discovery log page command 00:21:04.024 [2024-11-20 09:14:42.691508] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:04.025 [2024-11-20 09:14:42.691525] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.025 [2024-11-20 09:14:42.696852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.696895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.696910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.696920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.696930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.696940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.696950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.696960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.696970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.025 [2024-11-20 09:14:42.699406] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:04.025 [2024-11-20 09:14:42.699628] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:04.025 [2024-11-20 09:14:42.700492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.700523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.700536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.700545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.700556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.700565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.700575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:04.025 [2024-11-20 09:14:42.700589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:04.025 [2024-11-20 09:14:42.700598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.025 09:14:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:21:04.025 [2024-11-20 09:14:42.706791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.710458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.716811] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.025 [2024-11-20 09:14:42.716945] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.025 [2024-11-20 09:14:42.716965] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.716972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.025 [2024-11-20 09:14:42.717014] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:04.025 [2024-11-20 09:14:42.717105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.025 [2024-11-20 09:14:42.717128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.025 [2024-11-20 09:14:42.717140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.025 [2024-11-20 09:14:42.717158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.717175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.025 [2024-11-20 09:14:42.717195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.025 [2024-11-20 09:14:42.717207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.025 [2024-11-20 09:14:42.717215] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.025 [2024-11-20 09:14:42.717222] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.025 [2024-11-20 09:14:42.717228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.025 [2024-11-20 09:14:42.720466] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.025 [2024-11-20 09:14:42.720601] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 
00:21:04.025 [2024-11-20 09:14:42.720613] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.720619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.025 [2024-11-20 09:14:42.720656] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.720720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.025 [2024-11-20 09:14:42.720743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.025 [2024-11-20 09:14:42.720754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.025 [2024-11-20 09:14:42.720828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.720847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.025 [2024-11-20 09:14:42.720856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.025 [2024-11-20 09:14:42.720865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.025 [2024-11-20 09:14:42.720874] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.025 [2024-11-20 09:14:42.720880] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 
00:21:04.025 [2024-11-20 09:14:42.720885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.025 [2024-11-20 09:14:42.727023] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.025 [2024-11-20 09:14:42.727044] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.025 [2024-11-20 09:14:42.727051] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.727056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.025 [2024-11-20 09:14:42.727100] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.727155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.025 [2024-11-20 09:14:42.727175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.025 [2024-11-20 09:14:42.727186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.025 [2024-11-20 09:14:42.727202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.727217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.025 [2024-11-20 09:14:42.727226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.025 [2024-11-20 09:14:42.727235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:04.025 [2024-11-20 09:14:42.727244] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.025 [2024-11-20 09:14:42.727250] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.025 [2024-11-20 09:14:42.727255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.025 [2024-11-20 09:14:42.730666] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.025 [2024-11-20 09:14:42.730814] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.025 [2024-11-20 09:14:42.730827] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.025 [2024-11-20 09:14:42.730832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.025 [2024-11-20 09:14:42.730867] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:04.025 [2024-11-20 09:14:42.730951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.025 [2024-11-20 09:14:42.730974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.025 [2024-11-20 09:14:42.730986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.025 [2024-11-20 09:14:42.731003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.025 [2024-11-20 09:14:42.731018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.731027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.731037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.026 [2024-11-20 09:14:42.731045] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.026 [2024-11-20 09:14:42.731052] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.026 [2024-11-20 09:14:42.731057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 09:14:42.737093] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.737115] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:21:04.026 [2024-11-20 09:14:42.737121] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.737126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.737155] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.737207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.737226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 09:14:42.737237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.737253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 09:14:42.737270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.737279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.737288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.026 [2024-11-20 09:14:42.737296] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.026 [2024-11-20 09:14:42.737302] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.026 [2024-11-20 09:14:42.737307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:04.026 [2024-11-20 09:14:42.740876] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.740897] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.026 [2024-11-20 09:14:42.740903] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.740908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.740933] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.741001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.741021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.026 [2024-11-20 09:14:42.741031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.741046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 09:14:42.741060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.741069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.741078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.026 [2024-11-20 09:14:42.741086] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:04.026 [2024-11-20 09:14:42.741092] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.026 [2024-11-20 09:14:42.741096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 09:14:42.747166] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.747194] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.026 [2024-11-20 09:14:42.747200] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.747206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.747253] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:04.026 [2024-11-20 09:14:42.747326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.747350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 09:14:42.747361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.747378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 09:14:42.747393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.747402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.747412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.026 [2024-11-20 09:14:42.747421] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.026 [2024-11-20 09:14:42.747427] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.026 [2024-11-20 09:14:42.747432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 09:14:42.750943] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.751099] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 
00:21:04.026 [2024-11-20 09:14:42.751111] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.751117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.751153] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.751216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.751237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.026 [2024-11-20 09:14:42.751249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.751266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 09:14:42.751281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.751290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.751300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.026 [2024-11-20 09:14:42.751308] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.026 [2024-11-20 09:14:42.751314] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 
00:21:04.026 [2024-11-20 09:14:42.751319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 09:14:42.757246] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.757269] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.026 [2024-11-20 09:14:42.757275] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.757280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.757310] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.757363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.757383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.026 [2024-11-20 09:14:42.757394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.757426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.026 [2024-11-20 09:14:42.757441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.026 [2024-11-20 09:14:42.757450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.026 [2024-11-20 09:14:42.757459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:04.026 [2024-11-20 09:14:42.757467] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.026 [2024-11-20 09:14:42.757473] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.026 [2024-11-20 09:14:42.757478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.026 [2024-11-20 09:14:42.761163] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.026 [2024-11-20 09:14:42.761327] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.026 [2024-11-20 09:14:42.761339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.026 [2024-11-20 09:14:42.761345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.026 [2024-11-20 09:14:42.761379] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:04.026 [2024-11-20 09:14:42.761441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.026 [2024-11-20 09:14:42.761462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.026 [2024-11-20 09:14:42.761473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.026 [2024-11-20 09:14:42.761490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.761505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.761514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.761524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.027 [2024-11-20 09:14:42.761532] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.027 [2024-11-20 09:14:42.761538] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.027 [2024-11-20 09:14:42.761543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.027 [2024-11-20 09:14:42.767319] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.767341] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:21:04.027 [2024-11-20 09:14:42.767347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.767352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.767397] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.767451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.767471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.027 [2024-11-20 09:14:42.767482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.027 [2024-11-20 09:14:42.767498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.767512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.767521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.767531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.027 [2024-11-20 09:14:42.767539] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.027 [2024-11-20 09:14:42.767545] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.027 [2024-11-20 09:14:42.767550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:04.027 [2024-11-20 09:14:42.771388] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.771535] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.027 [2024-11-20 09:14:42.771548] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.771553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.771588] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.771671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.771699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.027 [2024-11-20 09:14:42.771710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.027 [2024-11-20 09:14:42.771727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.771742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.771751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.771774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.027 [2024-11-20 09:14:42.771784] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:04.027 [2024-11-20 09:14:42.771790] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.027 [2024-11-20 09:14:42.771795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.027 [2024-11-20 09:14:42.777391] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.777427] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.027 [2024-11-20 09:14:42.777434] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.777439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.777469] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:04.027 [2024-11-20 09:14:42.777521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.777541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.027 [2024-11-20 09:14:42.777552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.027 [2024-11-20 09:14:42.777567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.777598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.777609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.777619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.027 [2024-11-20 09:14:42.777627] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.027 [2024-11-20 09:14:42.777633] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.027 [2024-11-20 09:14:42.777638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.027 [2024-11-20 09:14:42.781598] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.781621] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 
00:21:04.027 [2024-11-20 09:14:42.781627] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.781632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.781659] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.781713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.781732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.027 [2024-11-20 09:14:42.781743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.027 [2024-11-20 09:14:42.781771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.781788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.781797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.781806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.027 [2024-11-20 09:14:42.781814] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.027 [2024-11-20 09:14:42.781820] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 
00:21:04.027 [2024-11-20 09:14:42.781826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.027 [2024-11-20 09:14:42.787479] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.787504] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.027 [2024-11-20 09:14:42.787511] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.787516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.787547] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.787603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.787623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.027 [2024-11-20 09:14:42.787634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.027 [2024-11-20 09:14:42.787650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.027 [2024-11-20 09:14:42.787737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.027 [2024-11-20 09:14:42.787773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.027 [2024-11-20 09:14:42.787785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:04.027 [2024-11-20 09:14:42.787793] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.027 [2024-11-20 09:14:42.787800] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.027 [2024-11-20 09:14:42.787805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.027 [2024-11-20 09:14:42.791676] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.027 [2024-11-20 09:14:42.791700] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.027 [2024-11-20 09:14:42.791707] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.027 [2024-11-20 09:14:42.791712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.027 [2024-11-20 09:14:42.791742] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:04.027 [2024-11-20 09:14:42.791837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.027 [2024-11-20 09:14:42.791858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.027 [2024-11-20 09:14:42.791869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.791886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.791901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.791909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.791920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.028 [2024-11-20 09:14:42.791929] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.028 [2024-11-20 09:14:42.791935] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.028 [2024-11-20 09:14:42.791939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.028 [2024-11-20 09:14:42.797556] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.797699] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:21:04.028 [2024-11-20 09:14:42.797712] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.797718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.797754] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.797857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.028 [2024-11-20 09:14:42.797879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.028 [2024-11-20 09:14:42.797891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.797907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.797922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.797932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.797955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.028 [2024-11-20 09:14:42.797964] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.028 [2024-11-20 09:14:42.797971] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.028 [2024-11-20 09:14:42.797976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:04.028 [2024-11-20 09:14:42.801751] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.801781] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.028 [2024-11-20 09:14:42.801788] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.801793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.801821] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.801874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.028 [2024-11-20 09:14:42.801894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.028 [2024-11-20 09:14:42.801905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.801921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.801947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.801958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.801967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.028 [2024-11-20 09:14:42.801975] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:04.028 [2024-11-20 09:14:42.801981] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.028 [2024-11-20 09:14:42.801986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.028 [2024-11-20 09:14:42.807764] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.807792] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.028 [2024-11-20 09:14:42.807799] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.807804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.807847] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:04.028 [2024-11-20 09:14:42.807914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.028 [2024-11-20 09:14:42.807934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.028 [2024-11-20 09:14:42.807945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.807961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.807975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.807984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.807994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.028 [2024-11-20 09:14:42.808002] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.028 [2024-11-20 09:14:42.808008] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.028 [2024-11-20 09:14:42.808013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.028 [2024-11-20 09:14:42.811831] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.811853] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 
00:21:04.028 [2024-11-20 09:14:42.811859] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.811864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.811894] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.811955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.028 [2024-11-20 09:14:42.811974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.028 [2024-11-20 09:14:42.811985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.812000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.812015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.812024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.812033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.028 [2024-11-20 09:14:42.812041] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.028 [2024-11-20 09:14:42.812047] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 
00:21:04.028 [2024-11-20 09:14:42.812051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.028 [2024-11-20 09:14:42.817856] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.817876] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:04.028 [2024-11-20 09:14:42.817882] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.817887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.817914] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.817988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.028 [2024-11-20 09:14:42.818008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.028 [2024-11-20 09:14:42.818019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.028 [2024-11-20 09:14:42.818035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.028 [2024-11-20 09:14:42.818049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.028 [2024-11-20 09:14:42.818058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.028 [2024-11-20 09:14:42.818067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:04.028 [2024-11-20 09:14:42.818076] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.028 [2024-11-20 09:14:42.818082] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.028 [2024-11-20 09:14:42.818087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:04.028 [2024-11-20 09:14:42.821907] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:04.028 [2024-11-20 09:14:42.822076] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:04.028 [2024-11-20 09:14:42.822223] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:04.028 [2024-11-20 09:14:42.822340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:04.028 [2024-11-20 09:14:42.822471] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:04.028 [2024-11-20 09:14:42.822654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.029 [2024-11-20 09:14:42.822839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x235d4a0 with addr=10.0.0.4, port=4420 00:21:04.029 [2024-11-20 09:14:42.822908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235d4a0 is same with the state(6) to be set 00:21:04.029 [2024-11-20 09:14:42.823064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x235d4a0 (9): Bad file descriptor 00:21:04.029 [2024-11-20 09:14:42.823105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:04.029 [2024-11-20 09:14:42.823116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:04.029 [2024-11-20 09:14:42.823126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:04.029 [2024-11-20 09:14:42.823135] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:04.029 [2024-11-20 09:14:42.823141] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:04.029 [2024-11-20 09:14:42.823146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:04.029 [2024-11-20 09:14:42.827927] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:04.029 [2024-11-20 09:14:42.827950] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:21:04.029 [2024-11-20 09:14:42.827956] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:04.029 [2024-11-20 09:14:42.827962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:04.029 [2024-11-20 09:14:42.827992] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:04.029 [2024-11-20 09:14:42.828051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:04.029 [2024-11-20 09:14:42.828071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x234f7f0 with addr=10.0.0.2, port=4420 00:21:04.029 [2024-11-20 09:14:42.828083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234f7f0 is same with the state(6) to be set 00:21:04.029 [2024-11-20 09:14:42.828099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x234f7f0 (9): Bad file descriptor 00:21:04.029 [2024-11-20 09:14:42.828133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:04.029 [2024-11-20 09:14:42.828144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:04.029 [2024-11-20 09:14:42.828154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:04.029 [2024-11-20 09:14:42.828162] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:04.029 [2024-11-20 09:14:42.828168] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:04.029 [2024-11-20 09:14:42.828173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:21:04.029 [2024-11-20 09:14:42.830146] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:04.029 [2024-11-20 09:14:42.830285] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:04.029 [2024-11-20 09:14:42.830332] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:04.029 [2024-11-20 09:14:42.830382] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:21:04.029 [2024-11-20 09:14:42.830401] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:04.029 [2024-11-20 09:14:42.830417] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:04.029 [2024-11-20 09:14:42.916231] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:04.029 [2024-11-20 09:14:42.916308] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:04.964 09:14:43 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 
00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.964 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- 
# jq '. | length' 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.222 09:14:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.222 09:14:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:21:05.222 [2024-11-20 09:14:44.054070] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:06.157 09:14:45 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:06.157 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:06.416 [2024-11-20 09:14:45.255608] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:06.416 2024/11/20 09:14:45 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:06.416 request: 00:21:06.416 { 00:21:06.416 "method": "bdev_nvme_start_mdns_discovery", 00:21:06.416 "params": { 00:21:06.416 "name": "mdns", 00:21:06.416 "svcname": "_nvme-disc._http", 00:21:06.416 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:06.416 } 00:21:06.416 } 00:21:06.416 Got JSON-RPC error response 00:21:06.416 GoRPCClient: error on JSON-RPC call 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:06.416 09:14:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:21:06.983 [2024-11-20 09:14:45.844350] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:07.241 [2024-11-20 09:14:45.944349] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:07.241 [2024-11-20 09:14:46.044368] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:07.241 [2024-11-20 09:14:46.044724] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:07.241 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:07.241 cookie is 0 00:21:07.241 is_local: 1 00:21:07.241 our_own: 0 00:21:07.241 wide_area: 0 00:21:07.241 multicast: 1 00:21:07.241 cached: 1 00:21:07.241 [2024-11-20 09:14:46.144362] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:07.241 [2024-11-20 09:14:46.144668] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:07.241 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:07.241 cookie is 0 00:21:07.241 is_local: 1 00:21:07.241 our_own: 0 00:21:07.241 wide_area: 0 00:21:07.241 multicast: 1 00:21:07.241 cached: 1 00:21:07.242 [2024-11-20 09:14:46.144896] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:07.500 [2024-11-20 09:14:46.244367] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:07.500 [2024-11-20 09:14:46.244642] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:21:07.500 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:07.500 cookie is 0 00:21:07.500 is_local: 1 00:21:07.500 our_own: 0 00:21:07.500 wide_area: 0 00:21:07.500 multicast: 1 00:21:07.500 cached: 1 00:21:07.500 [2024-11-20 09:14:46.344365] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:07.500 [2024-11-20 09:14:46.344408] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.2) 00:21:07.500 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:07.500 cookie is 0 00:21:07.500 is_local: 1 00:21:07.500 our_own: 0 00:21:07.500 wide_area: 0 00:21:07.500 multicast: 1 00:21:07.500 cached: 1 00:21:07.500 [2024-11-20 09:14:46.344423] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:21:08.437 [2024-11-20 09:14:47.058066] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:08.437 [2024-11-20 09:14:47.058100] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:08.437 [2024-11-20 09:14:47.058120] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:08.437 [2024-11-20 09:14:47.144207] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:21:08.437 [2024-11-20 09:14:47.202690] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:21:08.437 [2024-11-20 09:14:47.203439] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x233afe0:1 started. 00:21:08.437 [2024-11-20 09:14:47.205189] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:08.437 [2024-11-20 09:14:47.205209] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:08.437 [2024-11-20 09:14:47.206869] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x233afe0 was disconnected and freed. delete nvme_qpair. 
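The `NOT rpc_cmd ... bdev_nvme_start_mdns_discovery` steps in the trace above expect the RPC to fail (Code=-17, File exists, because discovery is already running) and count that failure as success. A minimal self-contained sketch of that expected-failure pattern, assuming a simplified `NOT` helper (the real one in `autotest_common.sh` also validates the argument and caps the exit status):

```shell
# Simplified sketch of the NOT helper from autotest_common.sh:
# run a command that is expected to fail, and invert the result so
# an expected failure makes the test step pass.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command failed (nonzero exit).
    (( es != 0 ))
}

NOT false && echo "expected failure detected"
NOT true || echo "unexpected success detected"
```

In the log, the wrapped command is the duplicate `bdev_nvme_start_mdns_discovery` call, whose JSON-RPC "File exists" error is the outcome the test is asserting.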
00:21:08.437 [2024-11-20 09:14:47.257966] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:08.437 [2024-11-20 09:14:47.258154] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:08.437 [2024-11-20 09:14:47.258218] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:08.437 [2024-11-20 09:14:47.344082] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:21:08.696 [2024-11-20 09:14:47.402817] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:21:08.696 [2024-11-20 09:14:47.403676] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2359140:1 started. 00:21:08.696 [2024-11-20 09:14:47.405572] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:21:08.696 [2024-11-20 09:14:47.405742] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:08.696 [2024-11-20 09:14:47.407180] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2359140 was disconnected and freed. delete nvme_qpair. 
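The `get_bdev_list` and `get_discovery_ctrlrs` checks that follow pipe the RPC JSON through `jq -r '.[].name' | sort | xargs` to get a stable, space-separated name list that can be compared with a `[[ ... == ... ]]` pattern. A sketch of that normalization step, with mock bdev names standing in for the `rpc_cmd | jq` stage (no SPDK target is running here):

```shell
# Reproduce the list normalization used by get_bdev_list. The real
# test extracts names with: rpc_cmd bdev_get_bdevs | jq -r '.[].name'
# Mock names replace that stage in this sketch.
names=$(printf '%s\n' mdns1_nvme0n2 mdns0_nvme0n1 mdns1_nvme0n1 mdns0_nvme0n2)

# sort gives deterministic ordering; xargs joins lines with spaces.
bdev_list=$(echo "$names" | sort | xargs)
echo "$bdev_list"
```

This is why the trace compares against the literal escaped string `mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2`: the pipeline guarantees the same order regardless of how the RPC enumerates the bdevs.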
00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:11.985 09:14:50 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 [2024-11-20 09:14:50.443294] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:11.985 2024/11/20 09:14:50 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:11.985 request: 00:21:11.985 { 00:21:11.985 "method": "bdev_nvme_start_mdns_discovery", 00:21:11.985 "params": { 00:21:11.985 "name": "cdc", 00:21:11.985 "svcname": "_nvme-disc._tcp", 00:21:11.985 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:11.985 } 00:21:11.985 } 00:21:11.985 Got JSON-RPC error response 00:21:11.985 GoRPCClient: error on JSON-RPC call 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- 
# [[ -n '' ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 
09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.2 8009 found 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.2 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:11.985 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:11.986 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:11.986 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:11.986 
+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:11.986 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:11.986 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:11.986 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:11.986 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\2* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\2* ]] 00:21:11.986 09:14:50 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\2* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\2* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.986 09:14:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:21:11.986 [2024-11-20 09:14:50.644762] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.2 8009 'not found' 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.2 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:12.952 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:12.952 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:12.952 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" 
"p=tcp" == *\s\p\d\k\1* ]] 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:12.952 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 95524 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 95524 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 95550 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:21:12.953 Got SIGTERM, quitting. 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:12.953 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@99 -- # sync 00:21:12.953 Leaving mDNS multicast group on interface target1.IPv4 with address 10.0.0.4. 00:21:12.953 Leaving mDNS multicast group on interface target0.IPv4 with address 10.0.0.2. 00:21:12.953 avahi-daemon 0.8 exiting. 
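The `check_mdns_request_exists` calls above scan `avahi-browse -t -r _nvme-disc._tcp -p` parseable output line by line, glob-matching each record for the service name and address. A simplified, self-contained sketch of that check (the function name mirrors `mdns_discovery.sh`, but the body is reduced; the sample record is copied from the trace):

```shell
# Simplified sketch of check_mdns_request_exists: report whether a
# given service name and IP appear together on any avahi-browse -p
# record read from stdin.
check_mdns_request_exists() {
    local process=$1 ip=$2 line
    while IFS= read -r line; do
        if [[ $line == *"$process"* && $line == *"$ip"* ]]; then
            echo "found"
            return 0
        fi
    done
    echo "not found"
    return 1
}

# Sample resolved record, as emitted by avahi-browse in the log above.
output='=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.2;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"'

check_mdns_request_exists spdk0 10.0.0.2 <<< "$output"
check_mdns_request_exists spdk1 10.0.0.2 <<< "$output" || true
```

After `nvmf_subsystem_remove_listener` drops the 10.0.0.2:8009 listener for `spdk1`, the test runs this check in "not found" mode, which is exactly what the second trace above shows.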
00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@102 -- # set +e 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:13.211 rmmod nvme_tcp 00:21:13.211 rmmod nvme_fabrics 00:21:13.211 rmmod nvme_keyring 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@106 -- # set -e 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@107 -- # return 0 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@336 -- # '[' -n 95483 ']' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@337 -- # killprocess 95483 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 95483 ']' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 95483 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95483 00:21:13.211 killing process with pid 95483 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 95483' 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 95483 00:21:13.211 09:14:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 95483 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@254 -- # local dev 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:13.470 09:14:52 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # 
continue 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@261 -- # continue 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/setup.sh@274 -- # iptr 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # iptables-save 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:21:13.470 00:21:13.470 real 0m22.419s 00:21:13.470 user 0m44.093s 00:21:13.470 sys 0m2.240s 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.470 ************************************ 00:21:13.470 END TEST nvmf_mdns_discovery 00:21:13.470 ************************************ 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@44 -- # [[ 1 -eq 1 ]] 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@45 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.470 ************************************ 00:21:13.470 START TEST nvmf_host_multipath 00:21:13.470 ************************************ 00:21:13.470 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:13.730 * Looking for test storage... 00:21:13.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:13.730 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.731 
09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.731 --rc genhtml_branch_coverage=1 00:21:13.731 --rc genhtml_function_coverage=1 00:21:13.731 --rc genhtml_legend=1 00:21:13.731 --rc geninfo_all_blocks=1 00:21:13.731 --rc geninfo_unexecuted_blocks=1 00:21:13.731 00:21:13.731 ' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.731 --rc genhtml_branch_coverage=1 00:21:13.731 --rc genhtml_function_coverage=1 00:21:13.731 --rc genhtml_legend=1 00:21:13.731 --rc geninfo_all_blocks=1 00:21:13.731 --rc geninfo_unexecuted_blocks=1 00:21:13.731 00:21:13.731 ' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.731 --rc genhtml_branch_coverage=1 00:21:13.731 --rc genhtml_function_coverage=1 00:21:13.731 --rc genhtml_legend=1 00:21:13.731 --rc geninfo_all_blocks=1 00:21:13.731 --rc geninfo_unexecuted_blocks=1 00:21:13.731 00:21:13.731 ' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:13.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.731 --rc genhtml_branch_coverage=1 00:21:13.731 --rc genhtml_function_coverage=1 00:21:13.731 --rc genhtml_legend=1 00:21:13.731 --rc geninfo_all_blocks=1 00:21:13.731 --rc geninfo_unexecuted_blocks=1 00:21:13.731 00:21:13.731 ' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # 
uname -s 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@50 -- # : 0 00:21:13.731 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:13.731 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.731 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:13.732 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # return 0 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:21:13.732 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:13.732 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:13.992 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # 
create_veth target0 target0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:21:13.992 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:21:13.992 10.0.0.1 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local 
val=167772162 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:13.992 10.0.0.2 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:21:13.992 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # 
local dev=target0_br in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # ips=() 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # [[ tcp 
== tcp ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:21:13.993 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@63 -- # set_ip 
initiator1 167772163 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:21:13.993 10.0.0.3 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:21:13.993 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:21:13.993 10.0.0.4 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:13.993 09:14:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:21:13.993 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:21:14.253 
09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.253 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:14.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:14.254 00:21:14.254 --- 10.0.0.1 ping statistics --- 00:21:14.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.254 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:14.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:21:14.254 00:21:14.254 --- 10.0.0.2 ping statistics --- 00:21:14.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.254 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # 
get_net_dev initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:14.254 09:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:21:14.254 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:14.254 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:21:14.254 00:21:14.254 --- 10.0.0.3 ping statistics --- 00:21:14.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.254 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 
-- # [[ -n 10.0.0.4 ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:21:14.254 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:14.254 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:21:14.254 00:21:14.254 --- 10.0.0.4 ping statistics --- 00:21:14.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.254 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@281 -- # return 0 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:14.254 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:14.255 
09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 
00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@101 -- # echo target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:14.255 09:14:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@328 -- # nvmfpid=96191 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@329 -- # waitforlisten 96191 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96191 ']' 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.255 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:14.514 [2024-11-20 09:14:53.191424] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:21:14.514 [2024-11-20 09:14:53.191545] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.514 [2024-11-20 09:14:53.342214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:14.514 [2024-11-20 09:14:53.403360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.514 [2024-11-20 09:14:53.403435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.514 [2024-11-20 09:14:53.403463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.514 [2024-11-20 09:14:53.403472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.514 [2024-11-20 09:14:53.403479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:14.514 [2024-11-20 09:14:53.404683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.514 [2024-11-20 09:14:53.404692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96191 00:21:14.773 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:15.031 [2024-11-20 09:14:53.863957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.031 09:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:15.598 Malloc0 00:21:15.598 09:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:15.855 09:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.113 09:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.371 [2024-11-20 09:14:55.159361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.371 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:16.630 [2024-11-20 09:14:55.451529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96280 00:21:16.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96280 /var/tmp/bdevperf.sock 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96280 ']' 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.630 09:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:18.004 09:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.004 09:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:18.004 09:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:18.004 09:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:18.263 Nvme0n1 00:21:18.521 09:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:18.780 Nvme0n1 00:21:18.780 09:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:18.780 09:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:19.714 09:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:19.714 09:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:20.281 09:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:20.538 09:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:20.538 09:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:20.538 09:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96374 00:21:20.538 09:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.140 Attaching 4 probes... 
00:21:27.140 @path[10.0.0.2, 4421]: 17281 00:21:27.140 @path[10.0.0.2, 4421]: 16273 00:21:27.140 @path[10.0.0.2, 4421]: 18068 00:21:27.140 @path[10.0.0.2, 4421]: 17137 00:21:27.140 @path[10.0.0.2, 4421]: 17369 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96374 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:27.140 09:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:27.399 09:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:27.399 09:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:27.399 09:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96513 00:21:27.399 09:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.962 Attaching 4 probes... 00:21:33.962 @path[10.0.0.2, 4420]: 17271 00:21:33.962 @path[10.0.0.2, 4420]: 17436 00:21:33.962 @path[10.0.0.2, 4420]: 17663 00:21:33.962 @path[10.0.0.2, 4420]: 17510 00:21:33.962 @path[10.0.0.2, 4420]: 17223 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96513 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:33.962 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:34.221 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:34.221 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96642 00:21:34.221 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:34.221 09:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:40.784 09:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:40.784 09:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.784 Attaching 4 probes... 
00:21:40.784 @path[10.0.0.2, 4421]: 13248 00:21:40.784 @path[10.0.0.2, 4421]: 17408 00:21:40.784 @path[10.0.0.2, 4421]: 17085 00:21:40.784 @path[10.0.0.2, 4421]: 15711 00:21:40.784 @path[10.0.0.2, 4421]: 16941 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96642 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:40.784 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:21:41.042 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:41.042 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96774 00:21:41.042 09:15:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:41.042 09:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:47.606 09:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:47.606 09:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.606 Attaching 4 probes... 00:21:47.606 00:21:47.606 00:21:47.606 00:21:47.606 00:21:47.606 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96774 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 
00:21:47.606 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:47.865 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:48.123 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:48.123 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96911 00:21:48.123 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:48.123 09:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:54.681 09:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:54.681 09:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.681 Attaching 4 probes... 
00:21:54.681 @path[10.0.0.2, 4421]: 16667 00:21:54.681 @path[10.0.0.2, 4421]: 17161 00:21:54.681 @path[10.0.0.2, 4421]: 17195 00:21:54.681 @path[10.0.0.2, 4421]: 17053 00:21:54.681 @path[10.0.0.2, 4421]: 17193 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96911 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:54.681 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:54.681 [2024-11-20 09:15:33.409804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acee90 is same with the state(6) to be set 00:21:54.682 09:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:21:55.615 09:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:21:55.615 09:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97041 00:21:55.615 09:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:55.615 09:15:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 --
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.194 Attaching 4 probes... 00:22:02.194 @path[10.0.0.2, 4420]: 16798 00:22:02.194 @path[10.0.0.2, 4420]: 16888 00:22:02.194 @path[10.0.0.2, 4420]: 17341 00:22:02.194 @path[10.0.0.2, 4420]: 17906 00:22:02.194 @path[10.0.0.2, 4420]: 17535 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97041 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.194 09:15:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:02.194 [2024-11-20 09:15:41.047462] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:02.194 09:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:02.761 09:15:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:09.319 09:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:09.319 09:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97234 00:22:09.319 09:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96191 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:09.319 09:15:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:14.608 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:14.608 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.867 Attaching 4 probes... 
00:22:14.867 @path[10.0.0.2, 4421]: 16656 00:22:14.867 @path[10.0.0.2, 4421]: 17021 00:22:14.867 @path[10.0.0.2, 4421]: 17108 00:22:14.867 @path[10.0.0.2, 4421]: 17135 00:22:14.867 @path[10.0.0.2, 4421]: 17354 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97234 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96280 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96280 ']' 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96280 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96280 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath 
-- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:14.867 killing process with pid 96280 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96280' 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96280 00:22:14.867 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96280 00:22:15.134 { 00:22:15.134 "results": [ 00:22:15.134 { 00:22:15.134 "job": "Nvme0n1", 00:22:15.134 "core_mask": "0x4", 00:22:15.134 "workload": "verify", 00:22:15.134 "status": "terminated", 00:22:15.134 "verify_range": { 00:22:15.134 "start": 0, 00:22:15.134 "length": 16384 00:22:15.134 }, 00:22:15.134 "queue_depth": 128, 00:22:15.134 "io_size": 4096, 00:22:15.134 "runtime": 56.059113, 00:22:15.134 "iops": 7374.626137948348, 00:22:15.134 "mibps": 28.807133351360733, 00:22:15.134 "io_failed": 0, 00:22:15.134 "io_timeout": 0, 00:22:15.134 "avg_latency_us": 17326.220333844594, 00:22:15.134 "min_latency_us": 603.2290909090909, 00:22:15.134 "max_latency_us": 7046430.72 00:22:15.134 } 00:22:15.134 ], 00:22:15.134 "core_count": 1 00:22:15.134 } 00:22:15.135 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96280 00:22:15.135 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:15.135 [2024-11-20 09:14:55.519085] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:22:15.135 [2024-11-20 09:14:55.519193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96280 ] 00:22:15.135 [2024-11-20 09:14:55.669071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.135 [2024-11-20 09:14:55.737779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.135 Running I/O for 90 seconds... 00:22:15.135 9142.00 IOPS, 35.71 MiB/s [2024-11-20T09:15:54.054Z] 9106.00 IOPS, 35.57 MiB/s [2024-11-20T09:15:54.054Z] 9022.33 IOPS, 35.24 MiB/s [2024-11-20T09:15:54.054Z] 8795.75 IOPS, 34.36 MiB/s [2024-11-20T09:15:54.054Z] 8836.40 IOPS, 34.52 MiB/s [2024-11-20T09:15:54.054Z] 8809.17 IOPS, 34.41 MiB/s [2024-11-20T09:15:54.054Z] 8785.57 IOPS, 34.32 MiB/s [2024-11-20T09:15:54.054Z] 8777.00 IOPS, 34.29 MiB/s [2024-11-20T09:15:54.054Z] [2024-11-20 09:15:06.039128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:60032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:60040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:60064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:60088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.039967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.039985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60176 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:22:15.135 [2024-11-20 09:15:06.040406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.040445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.040477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.041121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.041169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.041208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 
[2024-11-20 09:15:06.041247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:60272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.041286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:15.135 [2024-11-20 09:15:06.041308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.135 [2024-11-20 09:15:06.041325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:60288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 
09:15:06.041517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.041654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:59712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 
09:15:06.041735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:59720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:59728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.041951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.041981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 
09:15:06.042006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 
09:15:06.042225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:60400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:60416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.136 [2024-11-20 09:15:06.042503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:59816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.042971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.042993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.136 [2024-11-20 09:15:06.043010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:15.136 [2024-11-20 09:15:06.043032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:59864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:59872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:59920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043666] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.137 [2024-11-20 09:15:06.043881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.043920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.043959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.043980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.043997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:60456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.137 [2024-11-20 09:15:06.044315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:15.137 [2024-11-20 09:15:06.044337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.044375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.044425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.044457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.044479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.044513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.044541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.044559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045518] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.045959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.045984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046001] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046243] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:06.046389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:06.046406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:15.138 8765.11 IOPS, 34.24 MiB/s [2024-11-20T09:15:54.057Z] 8757.80 IOPS, 34.21 MiB/s [2024-11-20T09:15:54.057Z] 8769.27 IOPS, 34.25 MiB/s [2024-11-20T09:15:54.057Z] 8772.33 IOPS, 34.27 MiB/s [2024-11-20T09:15:54.057Z] 8769.00 IOPS, 34.25 MiB/s [2024-11-20T09:15:54.057Z] 8756.29 IOPS, 34.20 MiB/s [2024-11-20T09:15:54.057Z] [2024-11-20 09:15:12.649690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.649752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.649866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.649891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.649918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.649935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.649984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.650032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.650059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.650077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.650100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.650117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:15.138 [2024-11-20 09:15:12.650139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.138 [2024-11-20 09:15:12.650156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.139 [2024-11-20 09:15:12.650196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.139 [2024-11-20 09:15:12.650476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.139 [2024-11-20 09:15:12.650523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.139 [2024-11-20 09:15:12.650564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.139 [2024-11-20 09:15:12.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:128008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:128048 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.650965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.650989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:61 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128136 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.139 [2024-11-20 09:15:12.651760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:15.139 [2024-11-20 09:15:12.651783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.651800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.651836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.651855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.651887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.651905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.651929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.651946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.651969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.651986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 
cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 
dnr:0 00:22:15.140 [2024-11-20 09:15:12.652735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.652751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.652809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.652870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.652913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.652956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.652982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:15.140 [2024-11-20 09:15:12.652999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:22:15.140 [2024-11-20 09:15:12.653250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.140 [2024-11-20 09:15:12.653469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:15.140 [2024-11-20 09:15:12.653512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.653554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.653596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.653639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:15.140 [2024-11-20 09:15:12.653665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.140 [2024-11-20 09:15:12.653682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.653817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.653844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:15.141 
[2024-11-20 09:15:12.653875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.653894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.653922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.653953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.653984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 
09:15:12.654147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 
09:15:12.654430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 
09:15:12.654683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.654912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 
09:15:12.654958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.654975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.655028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.655076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.141 [2024-11-20 09:15:12.655120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 
09:15:12.655211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 
09:15:12.655487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.141 [2024-11-20 09:15:12.655549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:15.141 [2024-11-20 09:15:12.655581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 
09:15:12.655746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.655969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.655987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 
09:15:12.656014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.656031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.656059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.656076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.656103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.656120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.656147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.656173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.656202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:12.656220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:12.656247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 
09:15:12.656280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:15.142 8712.60 IOPS, 34.03 MiB/s [2024-11-20T09:15:54.061Z] 8200.81 IOPS, 32.03 MiB/s [2024-11-20T09:15:54.061Z] 8227.88 IOPS, 32.14 MiB/s [2024-11-20T09:15:54.061Z] 8249.17 IOPS, 32.22 MiB/s [2024-11-20T09:15:54.061Z] 8266.26 IOPS, 32.29 MiB/s [2024-11-20T09:15:54.061Z] 8239.90 IOPS, 32.19 MiB/s [2024-11-20T09:15:54.061Z] 8255.81 IOPS, 32.25 MiB/s [2024-11-20T09:15:54.061Z] 8274.41 IOPS, 32.32 MiB/s [2024-11-20T09:15:54.061Z] [2024-11-20 09:15:19.894372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.894789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.894808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.142 [2024-11-20 09:15:19.895111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:15.142 [2024-11-20 09:15:19.895188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:22:15.142 [2024-11-20 09:15:19.895410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.142 [2024-11-20 09:15:19.895586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:15.142 [2024-11-20 09:15:19.895609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:15.142 [2024-11-20 09:15:19.895626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 
00:22:15.143 [2024-11-20 09:15:19.895883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.895962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.895979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:15.143 [2024-11-20 09:15:19.896100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:15.143 
[2024-11-20 09:15:19.896348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 
09:15:19.896568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 
09:15:19.896816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.896978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.896995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 
09:15:19.897036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.897076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.897116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.897156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.897196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 09:15:19.897228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.143 [2024-11-20 09:15:19.897247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:15.143 [2024-11-20 
09:15:19.897270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 
09:15:19.897488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 
09:15:19.897721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.144 [2024-11-20 09:15:19.897835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.897881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.897921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.897974] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.897998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:15.144 [2024-11-20 09:15:19.898451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.144 [2024-11-20 09:15:19.898476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.898971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.898988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.899968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.899986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.145 [2024-11-20 09:15:19.900495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:15.145 [2024-11-20 09:15:19.900523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.146 [2024-11-20 09:15:19.900541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:19.900569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.146 [2024-11-20 09:15:19.900586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:15.146 7974.39 IOPS, 31.15 MiB/s [2024-11-20T09:15:54.065Z] 7642.12 IOPS, 29.85 MiB/s [2024-11-20T09:15:54.065Z] 7336.44 IOPS, 28.66 MiB/s [2024-11-20T09:15:54.065Z] 7054.27 IOPS, 27.56 MiB/s [2024-11-20T09:15:54.065Z] 6793.00 IOPS, 26.54 MiB/s [2024-11-20T09:15:54.065Z] 6550.39 IOPS, 25.59 MiB/s [2024-11-20T09:15:54.065Z] 6324.52 IOPS, 24.71 MiB/s [2024-11-20T09:15:54.065Z] 6347.93 IOPS, 24.80 MiB/s [2024-11-20T09:15:54.065Z] 6418.52 IOPS, 25.07 MiB/s [2024-11-20T09:15:54.065Z] 6486.56 IOPS, 25.34 MiB/s [2024-11-20T09:15:54.065Z] 6549.85 IOPS, 25.59 MiB/s [2024-11-20T09:15:54.065Z] 6607.79 IOPS, 25.81 MiB/s [2024-11-20T09:15:54.065Z] 6664.03 IOPS, 26.03 MiB/s [2024-11-20T09:15:54.065Z] [2024-11-20 09:15:33.409739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.146 [2024-11-20 09:15:33.409836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.409898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.146 [2024-11-20 09:15:33.409921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.409987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 
lba:77872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.410959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.410984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77960 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.146 [2024-11-20 09:15:33.411273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:22:15.146 [2024-11-20 09:15:33.411645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:78008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:78016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.411928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.411959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.411975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.411989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:7 nsid:1 lba:78488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:15.147 [2024-11-20 09:15:33.412220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:78536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:78568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412393] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:78600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.147 [2024-11-20 09:15:33.412545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 
lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 [2024-11-20 09:15:33.412733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.147 [2024-11-20 09:15:33.412747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.147 
[2024-11-20 09:15:33.412776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:78120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.412974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.412990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:78632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:78680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 
09:15:33.413309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413482] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:15.148 [2024-11-20 09:15:33.413638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.413982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.413998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.414012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.148 [2024-11-20 09:15:33.414028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.148 [2024-11-20 09:15:33.414042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:78320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:78328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:15.149 [2024-11-20 09:15:33.414230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:78352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:15.149 [2024-11-20 09:15:33.414563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:15.149 [2024-11-20 09:15:33.414578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.149 [2024-11-20 09:15:33.414593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.149 [2024-11-20 09:15:33.414607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.149 [2024-11-20 09:15:33.414624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.149 [2024-11-20 09:15:33.414642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.149 [2024-11-20 09:15:33.414665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.149 [2024-11-20 09:15:33.414680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.149 [2024-11-20 09:15:33.416350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:15.149 [2024-11-20 09:15:33.416450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:15.149 [2024-11-20 09:15:33.416477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:15.149 [2024-11-20 09:15:33.416510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99190 (9): Bad file descriptor
00:22:15.149 [2024-11-20 09:15:33.416650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:15.149 [2024-11-20 09:15:33.416683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b99190 with addr=10.0.0.2, port=4421
00:22:15.149 [2024-11-20 09:15:33.416701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99190 is same with the state(6) to be set
00:22:15.149 [2024-11-20 09:15:33.418345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99190 (9): Bad file descriptor
00:22:15.149 [2024-11-20 09:15:33.418989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:15.149 [2024-11-20 09:15:33.419020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:15.149 [2024-11-20 09:15:33.419036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:15.149 [2024-11-20 09:15:33.419051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:15.149 [2024-11-20 09:15:33.419067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:15.149 6716.75 IOPS, 26.24 MiB/s [2024-11-20T09:15:54.068Z] 6766.16 IOPS, 26.43 MiB/s [2024-11-20T09:15:54.068Z] 6810.82 IOPS, 26.60 MiB/s [2024-11-20T09:15:54.068Z] 6852.54 IOPS, 26.77 MiB/s [2024-11-20T09:15:54.068Z] 6897.62 IOPS, 26.94 MiB/s [2024-11-20T09:15:54.068Z] 6945.54 IOPS, 27.13 MiB/s [2024-11-20T09:15:54.068Z] 6991.21 IOPS, 27.31 MiB/s [2024-11-20T09:15:54.068Z] 7026.65 IOPS, 27.45 MiB/s [2024-11-20T09:15:54.068Z] 7061.98 IOPS, 27.59 MiB/s [2024-11-20T09:15:54.068Z] 7093.53 IOPS, 27.71 MiB/s [2024-11-20T09:15:54.068Z] [2024-11-20 09:15:43.515923] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:15.149 7125.07 IOPS, 27.83 MiB/s [2024-11-20T09:15:54.068Z] 7154.57 IOPS, 27.95 MiB/s [2024-11-20T09:15:54.068Z] 7182.73 IOPS, 28.06 MiB/s [2024-11-20T09:15:54.068Z] 7210.04 IOPS, 28.16 MiB/s [2024-11-20T09:15:54.068Z] 7236.16 IOPS, 28.27 MiB/s [2024-11-20T09:15:54.068Z] 7256.61 IOPS, 28.35 MiB/s [2024-11-20T09:15:54.068Z] 7280.44 IOPS, 28.44 MiB/s [2024-11-20T09:15:54.068Z] 7303.32 IOPS, 28.53 MiB/s [2024-11-20T09:15:54.068Z] 7328.19 IOPS, 28.63 MiB/s [2024-11-20T09:15:54.068Z] 7352.18 IOPS, 28.72 MiB/s [2024-11-20T09:15:54.068Z] 7375.59 IOPS, 28.81 MiB/s [2024-11-20T09:15:54.068Z] Received shutdown signal, test time was about 56.059946 seconds
00:22:15.149
00:22:15.149 Latency(us)
00:22:15.149 [2024-11-20T09:15:54.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.149 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:15.149 Verification LBA range: start 0x0 length 0x4000
00:22:15.149 Nvme0n1 : 56.06 7374.63 28.81 0.00 0.00 17326.22 603.23 7046430.72
00:22:15.149 [2024-11-20T09:15:54.068Z] ===================================================================================================================
00:22:15.149 [2024-11-20T09:15:54.068Z] Total : 7374.63 28.81 0.00 0.00 17326.22 603.23 7046430.72
00:22:15.150 09:15:53 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:15.408 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:22:15.408 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:15.408 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:22:15.408 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@335 -- # nvmfcleanup
00:22:15.408 09:15:54
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@99 -- # sync 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@102 -- # set +e 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:15.666 rmmod nvme_tcp 00:22:15.666 rmmod nvme_fabrics 00:22:15.666 rmmod nvme_keyring 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@106 -- # set -e 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@107 -- # return 0 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@336 -- # '[' -n 96191 ']' 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@337 -- # killprocess 96191 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96191 ']' 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96191 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96191 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 96191' 00:22:15.666 killing process with pid 96191 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96191 00:22:15.666 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96191 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@254 -- # local dev 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in 
"${dev_map[@]}" 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:22:15.924 09:15:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]]
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@261 -- # continue
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # _dev=0
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@41 -- # dev_map=()
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/setup.sh@274 -- # iptr
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-save
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:22:15.924 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@548 -- # iptables-restore
00:22:16.181
00:22:16.181 real 1m2.480s
00:22:16.181 user 2m58.092s
00:22:16.181 sys 0m13.391s
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:22:16.181 ************************************
00:22:16.181 END TEST nvmf_host_multipath
00:22:16.181 ************************************
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:16.181 09:15:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10
-- # set +x 00:22:16.181 ************************************ 00:22:16.181 START TEST nvmf_timeout 00:22:16.181 ************************************ 00:22:16.181 09:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:16.181 * Looking for test storage... 00:22:16.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:16.182 09:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:16.182 09:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:22:16.182 09:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.182 09:15:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:16.182 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:22:16.182 --rc genhtml_branch_coverage=1 00:22:16.182 --rc genhtml_function_coverage=1 00:22:16.182 --rc genhtml_legend=1 00:22:16.182 --rc geninfo_all_blocks=1 00:22:16.182 --rc geninfo_unexecuted_blocks=1 00:22:16.182 00:22:16.182 ' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.182 --rc genhtml_branch_coverage=1 00:22:16.182 --rc genhtml_function_coverage=1 00:22:16.182 --rc genhtml_legend=1 00:22:16.182 --rc geninfo_all_blocks=1 00:22:16.182 --rc geninfo_unexecuted_blocks=1 00:22:16.182 00:22:16.182 ' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.182 --rc genhtml_branch_coverage=1 00:22:16.182 --rc genhtml_function_coverage=1 00:22:16.182 --rc genhtml_legend=1 00:22:16.182 --rc geninfo_all_blocks=1 00:22:16.182 --rc geninfo_unexecuted_blocks=1 00:22:16.182 00:22:16.182 ' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:16.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.182 --rc genhtml_branch_coverage=1 00:22:16.182 --rc genhtml_function_coverage=1 00:22:16.182 --rc genhtml_legend=1 00:22:16.182 --rc geninfo_all_blocks=1 00:22:16.182 --rc geninfo_unexecuted_blocks=1 00:22:16.182 00:22:16.182 ' 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:16.182 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@5 -- # export PATH 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@50 -- # : 0 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:16.440 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:16.440 
09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@260 -- # remove_target_ns 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/common.sh@264 -- # [[ no == yes ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@280 -- # nvmf_veth_init 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@223 -- # create_target_ns 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@224 -- # create_main_bridge 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@105 -- # 
delete_main_bridge 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # return 0 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:22:16.440 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@28 -- # local -g _dev 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:16.441 09:15:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up initiator0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- 
nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0 up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/setup.sh@61 -- # add_to_ns target0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772161 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:16.441 10.0.0.1 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:16.441 
09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772162 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:16.441 10.0.0.2 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator0 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:22:16.441 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target0_br 00:22:16.442 09:15:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # ips=() 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:16.442 09:15:55 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up initiator1
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up initiator1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@59 -- # create_veth target1 target1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@151 -- # set_up target1
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1 up
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@152 -- # set_up target1_br
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@61 -- # add_to_ns target1
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns=
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772163
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772163
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.3
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias'
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.3
00:22:16.442 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias
00:22:16.442 10.0.0.3
00:22:16.700 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD
00:22:16.700 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD
00:22:16.700 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # val_to_ip 167772164
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@11 -- # local val=167772164
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@197 -- # ip=10.0.0.4
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # echo 10.0.0.4
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
00:22:16.701 10.0.0.4
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@66 -- # set_up initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up initiator1_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@129 -- # set_up target1_br
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@38 -- # ping_ips 2
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:22:16.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:16.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
00:22:16.701
00:22:16.701 --- 10.0.0.1 ping statistics ---
00:22:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:16.701 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:22:16.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:16.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms
00:22:16.701
00:22:16.701 --- 10.0.0.2 ping statistics ---
00:22:16.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:16.701 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1
00:22:16.701 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:22:16.702 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:22:16.702 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms
00:22:16.702
00:22:16.702 --- 10.0.0.3 ping statistics ---
00:22:16.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:16.702 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:22:16.702 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:22:16.702 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms
00:22:16.702
00:22:16.702 --- 10.0.0.4 ping statistics ---
00:22:16.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:16.702 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair++ ))
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@281 -- # return 0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=initiator1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target0
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # get_net_dev target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@98 -- # local dev=target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@101 -- # echo target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@159 -- # dev=target1
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:22:16.702 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@328 -- # nvmfpid=97612
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@329 -- # waitforlisten 97612
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97612 ']'
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:16.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:16.703 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:16.961 [2024-11-20 09:15:55.659302] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
[2024-11-20 09:15:55.659426] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:16.961 [2024-11-20 09:15:55.810168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:22:16.961 [2024-11-20 09:15:55.857247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:16.961 [2024-11-20 09:15:55.857338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:16.961 [2024-11-20 09:15:55.857364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:16.961 [2024-11-20 09:15:55.857372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:16.961 [2024-11-20 09:15:55.857380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:16.961 [2024-11-20 09:15:55.858638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:16.961 [2024-11-20 09:15:55.858629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:17.218 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:17.218 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:17.218 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:22:17.218 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:17.218 09:15:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:17.218 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:17.218 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:22:17.218 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:22:17.475 [2024-11-20 09:15:56.327534] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:17.475 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:22:18.041 Malloc0
00:22:18.041 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:18.299 09:15:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:18.557 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:18.557 [2024-11-20 09:15:57.468285] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97690
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97690 /var/tmp/bdevperf.sock
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97690 ']'
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:18.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.816 09:15:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:18.816 [2024-11-20 09:15:57.538528] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:22:18.816 [2024-11-20 09:15:57.538608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97690 ] 00:22:18.816 [2024-11-20 09:15:57.678629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.073 [2024-11-20 09:15:57.744998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.639 09:15:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:19.639 09:15:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:19.639 09:15:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:19.897 09:15:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:20.537 NVMe0n1 00:22:20.537 09:15:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:20.537 09:15:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97737 00:22:20.537 09:15:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:20.537 Running I/O for 10 seconds... 
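The target-setup and controller-attach steps captured above can be read more easily as a plain RPC sequence. This is a sketch reconstructed from the commands visible in this log (paths, NQN, address, and option values are taken verbatim from the log lines; it is not the `host/timeout.sh` script itself):

```shell
# Sketch of the nvmf-tcp setup exercised here, reconstructed from the log.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side: TCP transport, 64 MB malloc bdev (512-byte blocks),
# subsystem with one namespace, listener on 10.0.0.2:4420.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host (bdevperf) side, via its own RPC socket: options as invoked in the
# log, then attach with a 5 s ctrlr-loss timeout and 2 s reconnect delay.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
```

These commands require a running SPDK target and bdevperf instance, so the fragment is environment-specific and shown only to summarize the sequence.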
00:22:21.476 09:16:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:21.738 7585.00 IOPS, 29.63 MiB/s [2024-11-20T09:16:00.657Z] [2024-11-20 09:16:00.421864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8156b0 is same with the state(6) to be set
[previous recv-state message repeated 13 more times between 09:16:00.421939 and 09:16:00.422111]
00:22:21.738 [2024-11-20 09:16:00.422610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.738 [2024-11-20 09:16:00.422653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~90 further command/completion pairs elided: READ sqid:1 lba:70016-70752 len:8 and WRITE lba:70912, each completed ABORTED - SQ DELETION (00/08) qid:1 after the listener was removed; the dump continues beyond this excerpt]
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:21.740 [2024-11-20 09:16:00.425295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:70848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.740 [2024-11-20 09:16:00.425484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.740 [2024-11-20 09:16:00.425495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.741 [2024-11-20 09:16:00.425505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.741 [2024-11-20 09:16:00.425539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.741 [2024-11-20 09:16:00.425571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.741 [2024-11-20 09:16:00.425598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:70896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:21.741 [2024-11-20 09:16:00.425629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 
[2024-11-20 09:16:00.425775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425906] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.425964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.425991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.426006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.426018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:21.741 [2024-11-20 09:16:00.426027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.426038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea7af0 is same with the state(6) to be set 00:22:21.741 [2024-11-20 09:16:00.426051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:21.741 [2024-11-20 09:16:00.426063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:21.741 [2024-11-20 09:16:00.426072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71032 len:8 PRP1 0x0 PRP2 0x0 00:22:21.741 [2024-11-20 09:16:00.426082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:21.741 [2024-11-20 09:16:00.426434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.741 [2024-11-20 09:16:00.426531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bf50 (9): Bad file descriptor 00:22:21.741 [2024-11-20 09:16:00.426650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:21.741 [2024-11-20 09:16:00.426679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3bf50 with addr=10.0.0.2, port=4420 00:22:21.741 [2024-11-20 09:16:00.426691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bf50 is same with the state(6) to be set 00:22:21.741 [2024-11-20 09:16:00.426710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bf50 (9): Bad file descriptor 00:22:21.741 [2024-11-20 09:16:00.426730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:21.741 [2024-11-20 09:16:00.426747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:21.741 [2024-11-20 09:16:00.426786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:21.741 [2024-11-20 09:16:00.426804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:21.741 [2024-11-20 09:16:00.426815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:21.741 09:16:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:23.614 4376.00 IOPS, 17.09 MiB/s [2024-11-20T09:16:02.533Z] 2917.33 IOPS, 11.40 MiB/s [2024-11-20T09:16:02.533Z] [2024-11-20 09:16:02.427001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:23.615 [2024-11-20 09:16:02.427081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3bf50 with addr=10.0.0.2, port=4420 00:22:23.615 [2024-11-20 09:16:02.427098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bf50 is same with the state(6) to be set 00:22:23.615 [2024-11-20 09:16:02.427124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bf50 (9): Bad file descriptor 00:22:23.615 [2024-11-20 09:16:02.427159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:23.615 [2024-11-20 09:16:02.427171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:23.615 [2024-11-20 09:16:02.427182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:23.615 [2024-11-20 09:16:02.427194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:23.615 [2024-11-20 09:16:02.427206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:23.615 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:23.615 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.615 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:23.873 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:23.873 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:23.873 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:23.873 09:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:24.131 09:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:24.131 09:16:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:25.326 2188.00 IOPS, 8.55 MiB/s [2024-11-20T09:16:04.504Z] 1750.40 IOPS, 6.84 MiB/s [2024-11-20T09:16:04.504Z] [2024-11-20 09:16:04.427436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:25.585 [2024-11-20 09:16:04.427508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3bf50 with addr=10.0.0.2, port=4420 00:22:25.585 [2024-11-20 09:16:04.427526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3bf50 is same with the state(6) to be set 00:22:25.585 [2024-11-20 09:16:04.427552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3bf50 (9): Bad file descriptor 00:22:25.585 [2024-11-20 09:16:04.427573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:25.585 [2024-11-20 09:16:04.427583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:25.585 [2024-11-20 09:16:04.427594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:25.585 [2024-11-20 09:16:04.427606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:25.585 [2024-11-20 09:16:04.427618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:27.456 1458.67 IOPS, 5.70 MiB/s [2024-11-20T09:16:06.634Z] 1250.29 IOPS, 4.88 MiB/s [2024-11-20T09:16:06.634Z] [2024-11-20 09:16:06.427772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:27.715 [2024-11-20 09:16:06.427839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:27.715 [2024-11-20 09:16:06.427852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:27.715 [2024-11-20 09:16:06.427862] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:22:27.715 [2024-11-20 09:16:06.427875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:22:28.652 1094.00 IOPS, 4.27 MiB/s 00:22:28.652 Latency(us) 00:22:28.652 [2024-11-20T09:16:07.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.652 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:28.652 Verification LBA range: start 0x0 length 0x4000 00:22:28.652 NVMe0n1 : 8.20 1067.91 4.17 15.62 0.00 117976.24 2129.92 7046430.72 00:22:28.652 [2024-11-20T09:16:07.571Z] =================================================================================================================== 00:22:28.652 [2024-11-20T09:16:07.571Z] Total : 1067.91 4.17 15.62 0.00 117976.24 2129.92 7046430.72 00:22:28.652 { 00:22:28.652 "results": [ 00:22:28.652 { 00:22:28.652 "job": "NVMe0n1", 00:22:28.652 "core_mask": "0x4", 00:22:28.652 "workload": "verify", 00:22:28.652 "status": "finished", 00:22:28.652 "verify_range": { 00:22:28.652 "start": 0, 00:22:28.652 "length": 16384 00:22:28.652 }, 00:22:28.652 "queue_depth": 128, 00:22:28.652 "io_size": 4096, 00:22:28.652 "runtime": 8.195443, 00:22:28.652 "iops": 1067.9105449211227, 00:22:28.652 "mibps": 4.171525566098135, 00:22:28.652 "io_failed": 128, 00:22:28.652 "io_timeout": 0, 00:22:28.652 "avg_latency_us": 117976.2417952498, 00:22:28.652 "min_latency_us": 2129.92, 00:22:28.652 "max_latency_us": 7046430.72 00:22:28.652 } 00:22:28.652 ], 00:22:28.652 "core_count": 1 00:22:28.652 } 00:22:29.230 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:29.230 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:29.230 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:29.507 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:29.507 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:29.507 09:16:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:29.507 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:29.766 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:29.766 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97737 00:22:29.766 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97690 00:22:29.766 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97690 ']' 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97690 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97690 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:29.767 killing process with pid 97690 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97690' 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97690 00:22:29.767 Received shutdown signal, test time was about 9.403550 seconds 00:22:29.767 00:22:29.767 Latency(us) 00:22:29.767 [2024-11-20T09:16:08.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.767 [2024-11-20T09:16:08.686Z] =================================================================================================================== 00:22:29.767 
[2024-11-20T09:16:08.686Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:29.767 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97690 00:22:30.025 09:16:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.284 [2024-11-20 09:16:09.109996] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97895 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97895 /var/tmp/bdevperf.sock 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97895 ']' 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.284 09:16:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:30.284 [2024-11-20 09:16:09.188105] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:22:30.284 [2024-11-20 09:16:09.188219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97895 ] 00:22:30.541 [2024-11-20 09:16:09.336322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.541 [2024-11-20 09:16:09.395412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.476 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.476 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:31.476 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:31.734 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:31.993 NVMe0n1 00:22:31.993 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.993 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97944 00:22:31.993 09:16:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:31.993 Running I/O for 10 seconds... 
00:22:32.929 09:16:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:33.192 9068.00 IOPS, 35.42 MiB/s [2024-11-20T09:16:12.111Z]
00:22:33.192 [2024-11-20 09:16:12.012435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86d9a0 is same with the state(6) to be set
00:22:33.193 [tcp.c:1773 recv-state message repeated verbatim for tqpair=0x86d9a0, timestamps 09:16:12.012498 through 09:16:12.013190]
00:22:33.193 [2024-11-20 09:16:12.013811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.193 [2024-11-20 09:16:12.013843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.193 [2024-11-20 09:16:12.013864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.193 [2024-11-20 09:16:12.013877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.193 [2024-11-20 09:16:12.013889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73
nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.193 [2024-11-20 09:16:12.013899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.195 [analogous command/completion pairs repeated for every outstanding I/O, timestamps 09:16:12.013911 through 09:16:12.015347: READ lba:83664-84064 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:84256-84376 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed as ABORTED - SQ DELETION (00/08)]
00:22:33.195 [2024-11-20 09:16:12.015357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET
0x0 len:0x1000 00:22:33.195 [2024-11-20 09:16:12.015367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:33.195 [2024-11-20 09:16:12.015712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:84208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.195 [2024-11-20 09:16:12.015736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.195 [2024-11-20 09:16:12.015748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.196 [2024-11-20 09:16:12.015757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.196 [2024-11-20 09:16:12.015803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.196 [2024-11-20 09:16:12.015829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.196 [2024-11-20 09:16:12.015849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.196 [2024-11-20 09:16:12.015868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.015984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.015993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 
[2024-11-20 09:16:12.016102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016241] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.196 [2024-11-20 09:16:12.016621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 [2024-11-20 09:16:12.016673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.196 [2024-11-20 09:16:12.016693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84648 len:8 PRP1 0x0 PRP2 0x0 00:22:33.196 [2024-11-20 09:16:12.016707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.196 
[2024-11-20 09:16:12.016721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:33.196 [2024-11-20 09:16:12.016729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:33.196 [2024-11-20 09:16:12.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84656 len:8 PRP1 0x0 PRP2 0x0 00:22:33.196 [2024-11-20 09:16:12.016749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:33.197 [2024-11-20 09:16:12.017108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:33.197 [2024-11-20 09:16:12.017208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:33.197 [2024-11-20 09:16:12.017329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.197 [2024-11-20 09:16:12.017353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:33.197 [2024-11-20 09:16:12.017364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same with the state(6) to be set 00:22:33.197 [2024-11-20 09:16:12.017383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:33.197 [2024-11-20 09:16:12.017399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:33.197 [2024-11-20 09:16:12.017411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:33.197 [2024-11-20 09:16:12.017427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:22:33.197 [2024-11-20 09:16:12.017443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:33.197 [2024-11-20 09:16:12.017470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:33.197 09:16:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:34.132 5227.50 IOPS, 20.42 MiB/s [2024-11-20T09:16:13.051Z] [2024-11-20 09:16:13.017597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:34.132 [2024-11-20 09:16:13.017668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:34.132 [2024-11-20 09:16:13.017685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same with the state(6) to be set 00:22:34.132 [2024-11-20 09:16:13.017709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:34.132 [2024-11-20 09:16:13.017729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:34.132 [2024-11-20 09:16:13.017738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:34.132 [2024-11-20 09:16:13.017749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:34.132 [2024-11-20 09:16:13.017761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:22:34.132 [2024-11-20 09:16:13.017785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:34.391 09:16:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:34.391 [2024-11-20 09:16:13.301325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:34.649 09:16:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97944 00:22:35.216 3485.00 IOPS, 13.61 MiB/s [2024-11-20T09:16:14.135Z] [2024-11-20 09:16:14.034382] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:37.102 2613.75 IOPS, 10.21 MiB/s [2024-11-20T09:16:16.956Z] 3615.40 IOPS, 14.12 MiB/s [2024-11-20T09:16:17.892Z] 4513.50 IOPS, 17.63 MiB/s [2024-11-20T09:16:19.269Z] 5124.29 IOPS, 20.02 MiB/s [2024-11-20T09:16:20.205Z] 5596.00 IOPS, 21.86 MiB/s [2024-11-20T09:16:21.143Z] 5975.11 IOPS, 23.34 MiB/s [2024-11-20T09:16:21.143Z] 6237.50 IOPS, 24.37 MiB/s 00:22:42.224 Latency(us) 00:22:42.224 [2024-11-20T09:16:21.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.224 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.224 Verification LBA range: start 0x0 length 0x4000 00:22:42.224 NVMe0n1 : 10.01 6243.64 24.39 0.00 0.00 20455.08 2115.03 3019898.88 00:22:42.224 [2024-11-20T09:16:21.143Z] =================================================================================================================== 00:22:42.224 [2024-11-20T09:16:21.143Z] Total : 6243.64 24.39 0.00 0.00 20455.08 2115.03 3019898.88 00:22:42.224 { 00:22:42.224 "results": [ 00:22:42.224 { 00:22:42.224 "job": "NVMe0n1", 00:22:42.224 "core_mask": "0x4", 00:22:42.224 "workload": "verify", 00:22:42.224 "status": "finished", 00:22:42.224 "verify_range": { 
00:22:42.224 "start": 0, 00:22:42.224 "length": 16384 00:22:42.224 }, 00:22:42.224 "queue_depth": 128, 00:22:42.224 "io_size": 4096, 00:22:42.224 "runtime": 10.010671, 00:22:42.224 "iops": 6243.637414514971, 00:22:42.224 "mibps": 24.389208650449106, 00:22:42.224 "io_failed": 0, 00:22:42.224 "io_timeout": 0, 00:22:42.224 "avg_latency_us": 20455.076187353916, 00:22:42.224 "min_latency_us": 2115.0254545454545, 00:22:42.224 "max_latency_us": 3019898.88 00:22:42.224 } 00:22:42.224 ], 00:22:42.224 "core_count": 1 00:22:42.224 } 00:22:42.224 09:16:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98061 00:22:42.225 09:16:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.225 09:16:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:42.225 Running I/O for 10 seconds... 00:22:43.160 09:16:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.430 9313.00 IOPS, 36.38 MiB/s [2024-11-20T09:16:22.349Z] [2024-11-20 09:16:22.175519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set 00:22:43.430 [2024-11-20 09:16:22.175586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set 00:22:43.430 [2024-11-20 09:16:22.175607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set 00:22:43.430 [2024-11-20 09:16:22.175616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set 00:22:43.430 [2024-11-20 09:16:22.175624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the 
state(6) to be set
00:22:43.430 [2024-11-20 09:16:22.175633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.430 [2024-11-20 09:16:22.175642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175947] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175979] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.175995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86be90 is same with the state(6) to be set
00:22:43.431 [2024-11-20 09:16:22.176879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.176912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.176935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.176946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.176958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.176968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.176979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.176989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.431 [2024-11-20 09:16:22.177191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.431 [2024-11-20 09:16:22.177202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.432 [2024-11-20 09:16:22.177337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:85816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.177982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.177996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.178006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.178016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.178026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.432 [2024-11-20 09:16:22.178036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.432 [2024-11-20 09:16:22.178046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:85872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:85968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:85992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:86016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:86024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.433 [2024-11-20 09:16:22.178619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.433 [2024-11-20 09:16:22.178859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.433 [2024-11-20 09:16:22.178870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:43.434 [2024-11-20 09:16:22.178879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.434 [2024-11-20 09:16:22.178890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.434 [2024-11-20 09:16:22.178900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.434 [2024-11-20 09:16:22.178911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.434 [2024-11-20 09:16:22.178920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.434 [2024-11-20 09:16:22.178930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:43.434 [2024-11-20 09:16:22.178940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:43.434
[2024-11-20 09:16:22.178951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.178961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.178972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.178981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.178992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 
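Each aborted completion above ends with a status pair like "ABORTED - SQ DELETION (00/08)": SPDK prints (status code type / status code). A minimal decoder sketch, using the NVMe-spec meaning of these fields (SCT 0x0 is Generic Command Status; the table below is deliberately partial and only covers codes seen in logs like this one):

```python
# Decode the "(00/08)" pair printed by spdk_nvme_print_completion above.
# SCT 0x0 = Generic Command Status; SC 0x08 = Command Aborted due to SQ
# Deletion, matching the "ABORTED - SQ DELETION" text in the log.
SCT_GENERIC = 0x0

# Partial table of NVMe generic status codes (only a few entries shown).
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode(sct: int, sc: int) -> str:
    """Return a human-readable string for an NVMe (SCT, SC) status pair."""
    if sct == SCT_GENERIC:
        return GENERIC_STATUS.get(sc, f"unknown generic status 0x{sc:02x}")
    return f"unknown SCT 0x{sct:x}"

print(decode(0x00, 0x08))  # -> ABORTED - SQ DELETION
```

Every queued I/O in the burst above carries this same (00/08) status because the whole submission queue was deleted when the qpair went down, not because any individual command failed.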
[2024-11-20 09:16:22.179312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.434 [2024-11-20 09:16:22.179543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.434 [2024-11-20 09:16:22.179590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86288 len:8 PRP1 0x0 PRP2 0x0 00:22:43.434 [2024-11-20 09:16:22.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:43.434 [2024-11-20 09:16:22.179620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:43.434 [2024-11-20 09:16:22.179628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:86296 len:8 PRP1 0x0 PRP2 0x0 00:22:43.434 [2024-11-20 09:16:22.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.434 [2024-11-20 09:16:22.179934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:43.434 [2024-11-20 09:16:22.180017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:43.434 [2024-11-20 09:16:22.180124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.434 [2024-11-20 09:16:22.180146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:43.434 [2024-11-20 09:16:22.180158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same 
with the state(6) to be set 00:22:43.434 [2024-11-20 09:16:22.180176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:43.434 [2024-11-20 09:16:22.180192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:43.434 [2024-11-20 09:16:22.180201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:43.434 [2024-11-20 09:16:22.180212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:43.435 [2024-11-20 09:16:22.180223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:43.435 [2024-11-20 09:16:22.180233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:43.435 09:16:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:44.382 5330.00 IOPS, 20.82 MiB/s [2024-11-20T09:16:23.301Z] [2024-11-20 09:16:23.180360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.383 [2024-11-20 09:16:23.180432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:44.383 [2024-11-20 09:16:23.180464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same with the state(6) to be set 00:22:44.383 [2024-11-20 09:16:23.180505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:44.383 [2024-11-20 09:16:23.180525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:44.383 [2024-11-20 09:16:23.180535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] 
controller reinitialization failed 00:22:44.383 [2024-11-20 09:16:23.180547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:44.383 [2024-11-20 09:16:23.180558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:44.383 [2024-11-20 09:16:23.180570] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:45.318 3553.33 IOPS, 13.88 MiB/s [2024-11-20T09:16:24.237Z] [2024-11-20 09:16:24.180701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.318 [2024-11-20 09:16:24.180768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:45.319 [2024-11-20 09:16:24.180797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same with the state(6) to be set 00:22:45.319 [2024-11-20 09:16:24.180837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:45.319 [2024-11-20 09:16:24.180856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:45.319 [2024-11-20 09:16:24.180866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:45.319 [2024-11-20 09:16:24.180877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:45.319 [2024-11-20 09:16:24.180888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:45.319 [2024-11-20 09:16:24.180898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:46.513 2665.00 IOPS, 10.41 MiB/s [2024-11-20T09:16:25.432Z] [2024-11-20 09:16:25.184625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:46.513 [2024-11-20 09:16:25.184702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4e6f50 with addr=10.0.0.2, port=4420 00:22:46.513 [2024-11-20 09:16:25.184719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4e6f50 is same with the state(6) to be set 00:22:46.513 [2024-11-20 09:16:25.184998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4e6f50 (9): Bad file descriptor 00:22:46.513 [2024-11-20 09:16:25.185258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:46.513 [2024-11-20 09:16:25.185272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:46.513 [2024-11-20 09:16:25.185283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:46.513 [2024-11-20 09:16:25.185293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
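The IOPS readings printed once per second while the reconnects keep failing (5330.00, 3553.33, 2665.00, then 2132.00 just below) decay like a cumulative average: bdevperf reports total completed I/Os divided by total elapsed time, so a stalled run falls off as 1/t. A quick cross-check of that reading against the logged numbers (the total of 10660 I/Os and the 2..5 s elapsed times are inferred from the readings themselves, an assumption, not something the log states directly):

```python
# Check that the logged IOPS/MiB/s pairs match a fixed I/O count divided
# by a growing elapsed time, with a 4096-byte I/O size.
TOTAL_IOS = 10660  # inferred: 5330.00 * 2 s (assumption)
readings = {
    2: (5330.00, 20.82),
    3: (3553.33, 13.88),
    4: (2665.00, 10.41),
    5: (2132.00, 8.33),
}  # elapsed seconds -> (IOPS, MiB/s) as printed in the log

for elapsed, (iops, mibs) in readings.items():
    assert round(TOTAL_IOS / elapsed, 2) == iops       # 1/t decay
    assert round(iops * 4096 / (1 << 20), 2) == mibs   # MiB/s = IOPS * 4 KiB
print("all readings consistent with a cumulative average")
```

This is why the rate recovers so quickly once the listener is re-added: new completions start flowing again and the running average climbs back toward the steady-state rate.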
00:22:46.513 [2024-11-20 09:16:25.185305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:46.513 09:16:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.772 [2024-11-20 09:16:25.478817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:46.772 09:16:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 98061 00:22:47.340 2132.00 IOPS, 8.33 MiB/s [2024-11-20T09:16:26.259Z] [2024-11-20 09:16:26.211881] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:22:49.212 3044.50 IOPS, 11.89 MiB/s [2024-11-20T09:16:29.067Z] 3955.43 IOPS, 15.45 MiB/s [2024-11-20T09:16:30.442Z] 4652.62 IOPS, 18.17 MiB/s [2024-11-20T09:16:31.458Z] 5191.44 IOPS, 20.28 MiB/s [2024-11-20T09:16:31.458Z] 5628.80 IOPS, 21.99 MiB/s 00:22:52.539 Latency(us) 00:22:52.540 [2024-11-20T09:16:31.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.540 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.540 Verification LBA range: start 0x0 length 0x4000 00:22:52.540 NVMe0n1 : 10.01 5633.14 22.00 3636.36 0.00 13781.21 949.53 3019898.88 00:22:52.540 [2024-11-20T09:16:31.459Z] =================================================================================================================== 00:22:52.540 [2024-11-20T09:16:31.459Z] Total : 5633.14 22.00 3636.36 0.00 13781.21 0.00 3019898.88 00:22:52.540 { 00:22:52.540 "results": [ 00:22:52.540 { 00:22:52.540 "job": "NVMe0n1", 00:22:52.540 "core_mask": "0x4", 00:22:52.540 "workload": "verify", 00:22:52.540 "status": "finished", 00:22:52.540 "verify_range": { 00:22:52.540 "start": 0, 00:22:52.540 "length": 16384 00:22:52.540 }, 00:22:52.540 "queue_depth": 128, 
00:22:52.540 "io_size": 4096, 00:22:52.540 "runtime": 10.005069, 00:22:52.540 "iops": 5633.144559023031, 00:22:52.540 "mibps": 22.004470933683717, 00:22:52.540 "io_failed": 36382, 00:22:52.540 "io_timeout": 0, 00:22:52.540 "avg_latency_us": 13781.206307233557, 00:22:52.540 "min_latency_us": 949.5272727272727, 00:22:52.540 "max_latency_us": 3019898.88 00:22:52.540 } 00:22:52.540 ], 00:22:52.540 "core_count": 1 00:22:52.540 } 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97895 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97895 ']' 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97895 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97895 00:22:52.540 killing process with pid 97895 00:22:52.540 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.540 00:22:52.540 Latency(us) 00:22:52.540 [2024-11-20T09:16:31.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.540 [2024-11-20T09:16:31.459Z] =================================================================================================================== 00:22:52.540 [2024-11-20T09:16:31.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97895' 00:22:52.540 09:16:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97895 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97895 00:22:52.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98182 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98182 /var/tmp/bdevperf.sock 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 98182 ']' 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.540 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:52.540 [2024-11-20 09:16:31.338141] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:22:52.540 [2024-11-20 09:16:31.338249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98182 ] 00:22:52.799 [2024-11-20 09:16:31.485469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.799 [2024-11-20 09:16:31.541048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.799 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.799 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:52.799 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98195 00:22:52.799 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98182 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:52.799 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:53.057 09:16:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:53.624 NVMe0n1 00:22:53.624 09:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98250 00:22:53.624 09:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.624 09:16:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:53.624 Running I/O for 10 seconds... 
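The second bdevperf instance attaches the controller with `--ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2` (the attach_controller line above). A small sketch of the retry schedule those flags imply, assuming the documented SPDK semantics (retry every reconnect-delay seconds until the loss timeout elapses, then delete the controller); the helper name is illustrative, not an SPDK API:

```python
# Sketch: reconnect attempt times implied by the bdev_nvme flags above,
# under the assumption that retries fire every `delay_s` seconds and the
# controller is given up once `loss_timeout_s` has elapsed.
def reconnect_attempts(loss_timeout_s: int, delay_s: int) -> list[int]:
    """Seconds after disconnect at which reconnect attempts would fire."""
    return list(range(delay_s, loss_timeout_s, delay_s))

print(reconnect_attempts(5, 2))  # -> [2, 4]: two tries, then give up at 5 s
```

With a 10-second job and a 5-second loss timeout, a target that stays down mid-run leaves only those two reconnect windows before the bdev layer fails the controller, which is exactly the failure mode this timeout test exercises.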
00:22:54.561 09:16:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.824 17509.00 IOPS, 68.39 MiB/s [2024-11-20T09:16:33.743Z] [2024-11-20 09:16:33.539956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540047] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x86ebe0 is same with the state(6) to be set (same tcp.c:1773 recv-state message repeated; identical duplicates omitted)
09:16:33.540340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540440] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.824 [2024-11-20 09:16:33.540614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 
is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 
00:22:54.825 [2024-11-20 09:16:33.540747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540802] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540871] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.540994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 
is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x86ebe0 is same with the state(6) to be set 00:22:54.825 [2024-11-20 09:16:33.541486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.825 [2024-11-20 09:16:33.541516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-11-20 09:16:33.541537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.825 [2024-11-20 09:16:33.541550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-11-20 09:16:33.541562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:24120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.825 [2024-11-20 09:16:33.541572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.825 [2024-11-20 09:16:33.541583] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.825 [2024-11-20 09:16:33.541778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.825 [2024-11-20 09:16:33.541788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:94136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:66976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.541984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.541995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:59616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:75568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:111248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:65408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:27440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.826 [2024-11-20 09:16:33.542606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.826 [2024-11-20 09:16:33.542617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.827 [2024-11-20 09:16:33.542626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.827 [2024-11-20 09:16:33.542637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.827 [2024-11-20 09:16:33.542645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.827 [2024-11-20 09:16:33.542656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1
lba:35016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 
[2024-11-20 09:16:33.542796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.542988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.542999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 
[2024-11-20 09:16:33.543137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:130672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.827 [2024-11-20 09:16:33.543418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.827 [2024-11-20 09:16:33.543426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.828 [2024-11-20 09:16:33.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62984 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74696 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16712 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111464 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543611] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 
[2024-11-20 09:16:33.543618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48744 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:48472 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49336 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:54872 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103160 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85712 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32232 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543859] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66936 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543898] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20920 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543954] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.543968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42480 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.543977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.543986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.543992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.544000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50736 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.544009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.544017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.544024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.544032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38104 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.544041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.544054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.544061] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.544069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92976 len:8 PRP1 0x0 PRP2 0x0 00:22:54.828 [2024-11-20 09:16:33.544078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.828 [2024-11-20 09:16:33.544092] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.828 [2024-11-20 09:16:33.544099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.828 [2024-11-20 09:16:33.544107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128296 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.544115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.544124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.544131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.544139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45392 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.544156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.544163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44816 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.544179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.544188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.544197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.544205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15304 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.544213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.544222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.544229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.544237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124816 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.544245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.544254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.544261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.559964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560053] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560060] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12168 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92320 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76672 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560150] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 
[2024-11-20 09:16:33.560157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67240 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52760 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107328 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:40384 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85480 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560342] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.829 [2024-11-20 09:16:33.560348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.829 [2024-11-20 09:16:33.560356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94704 len:8 PRP1 0x0 PRP2 0x0 00:22:54.829 [2024-11-20 09:16:33.560364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560529] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.829 [2024-11-20 09:16:33.560548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.829 [2024-11-20 09:16:33.560568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.829 [2024-11-20 09:16:33.560586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.829 [2024-11-20 09:16:33.560604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.829 [2024-11-20 09:16:33.560613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a7f50 is same with the state(6) to be set 00:22:54.829 [2024-11-20 09:16:33.560888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:54.829 [2024-11-20 09:16:33.560915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7f50 (9): Bad file descriptor 00:22:54.829 [2024-11-20 09:16:33.561026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.829 [2024-11-20 09:16:33.561049] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a7f50 with addr=10.0.0.2, port=4420 00:22:54.829 [2024-11-20 09:16:33.561060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a7f50 is same with the state(6) to be set 00:22:54.829 [2024-11-20 09:16:33.561078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7f50 (9): Bad file descriptor 00:22:54.829 [2024-11-20 09:16:33.561094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:54.829 [2024-11-20 09:16:33.561103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:54.829 [2024-11-20 09:16:33.561113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:54.829 [2024-11-20 09:16:33.561123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:22:54.829 [2024-11-20 09:16:33.561133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:54.829 09:16:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 98250 00:22:56.703 10151.50 IOPS, 39.65 MiB/s [2024-11-20T09:16:35.622Z] 6767.67 IOPS, 26.44 MiB/s [2024-11-20T09:16:35.622Z] [2024-11-20 09:16:35.561504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.703 [2024-11-20 09:16:35.561586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a7f50 with addr=10.0.0.2, port=4420 00:22:56.703 [2024-11-20 09:16:35.561608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a7f50 is same with the state(6) to be set 00:22:56.703 [2024-11-20 09:16:35.561632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7f50 (9): Bad file descriptor 00:22:56.703 [2024-11-20 09:16:35.561652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:56.703 [2024-11-20 09:16:35.561663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:56.704 [2024-11-20 09:16:35.561674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:56.704 [2024-11-20 09:16:35.561685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:22:56.704 [2024-11-20 09:16:35.561697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:58.575 5075.75 IOPS, 19.83 MiB/s [2024-11-20T09:16:37.753Z] 4060.60 IOPS, 15.86 MiB/s [2024-11-20T09:16:37.753Z] [2024-11-20 09:16:37.561950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.834 [2024-11-20 09:16:37.562043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a7f50 with addr=10.0.0.2, port=4420 00:22:58.834 [2024-11-20 09:16:37.562061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a7f50 is same with the state(6) to be set 00:22:58.834 [2024-11-20 09:16:37.562088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20a7f50 (9): Bad file descriptor 00:22:58.834 [2024-11-20 09:16:37.562122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:58.834 [2024-11-20 09:16:37.562134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:58.834 [2024-11-20 09:16:37.562145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:58.834 [2024-11-20 09:16:37.562160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:58.834 [2024-11-20 09:16:37.562171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:00.741 3383.83 IOPS, 13.22 MiB/s [2024-11-20T09:16:39.660Z] 2900.43 IOPS, 11.33 MiB/s [2024-11-20T09:16:39.660Z] [2024-11-20 09:16:39.562281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:23:00.741 [2024-11-20 09:16:39.562337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:00.741 [2024-11-20 09:16:39.562350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:00.741 [2024-11-20 09:16:39.562360] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:00.741 [2024-11-20 09:16:39.562372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:01.706 2537.88 IOPS, 9.91 MiB/s 00:23:01.706 Latency(us) 00:23:01.706 [2024-11-20T09:16:40.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.706 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:01.706 NVMe0n1 : 8.18 2481.59 9.69 15.65 0.00 51304.09 3932.16 7046430.72 00:23:01.706 [2024-11-20T09:16:40.625Z] =================================================================================================================== 00:23:01.706 [2024-11-20T09:16:40.625Z] Total : 2481.59 9.69 15.65 0.00 51304.09 3932.16 7046430.72 00:23:01.706 { 00:23:01.706 "results": [ 00:23:01.706 { 00:23:01.706 "job": "NVMe0n1", 00:23:01.706 "core_mask": "0x4", 00:23:01.706 "workload": "randread", 00:23:01.706 "status": "finished", 00:23:01.706 "queue_depth": 128, 00:23:01.706 "io_size": 4096, 00:23:01.706 "runtime": 8.181463, 00:23:01.706 "iops": 2481.5855061619177, 00:23:01.706 "mibps": 9.693693383444991, 00:23:01.706 "io_failed": 128, 00:23:01.706 "io_timeout": 0, 00:23:01.706 "avg_latency_us": 51304.09013468838, 00:23:01.706 "min_latency_us": 3932.16, 00:23:01.706 "max_latency_us": 7046430.72 00:23:01.706 } 00:23:01.706 ], 00:23:01.706 "core_count": 1 00:23:01.706 } 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:01.706 Attaching 5 probes... 00:23:01.706 1353.675536: reset bdev controller NVMe0 00:23:01.706 1353.750621: reconnect bdev controller NVMe0 00:23:01.706 3354.151443: reconnect delay bdev controller NVMe0 00:23:01.706 3354.172347: reconnect bdev controller NVMe0 00:23:01.706 5354.599553: reconnect delay bdev controller NVMe0 00:23:01.706 5354.619500: reconnect bdev controller NVMe0 00:23:01.706 7355.051325: reconnect delay bdev controller NVMe0 00:23:01.706 7355.072992: reconnect bdev controller NVMe0 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 98195 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98182 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 98182 ']' 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 98182 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.706 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98182 00:23:01.965 killing process with pid 98182 00:23:01.965 Received shutdown signal, test time was about 8.248747 seconds 00:23:01.965 00:23:01.965 Latency(us) 00:23:01.965 [2024-11-20T09:16:40.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.965 [2024-11-20T09:16:40.884Z] 
=================================================================================================================== 00:23:01.965 [2024-11-20T09:16:40.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98182' 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 98182 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 98182 00:23:01.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@99 -- # sync 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@102 -- # set +e 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:02.224 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:02.483 rmmod nvme_tcp 00:23:02.483 rmmod nvme_fabrics 00:23:02.483 rmmod nvme_keyring 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@106 -- # 
set -e 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@107 -- # return 0 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@336 -- # '[' -n 97612 ']' 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@337 -- # killprocess 97612 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97612 ']' 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97612 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97612 00:23:02.483 killing process with pid 97612 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.483 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97612' 00:23:02.484 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97612 00:23:02.484 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97612 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@342 -- # nvmf_fini 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@254 -- # local dev 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:02.742 09:16:41 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/initiator1/address ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@261 -- # continue 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # _dev=0 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@41 -- # dev_map=() 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/setup.sh@274 -- # iptr 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-save 00:23:02.742 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@548 -- # iptables-restore 00:23:02.743 00:23:02.743 real 0m46.679s 00:23:02.743 user 
2m17.680s 00:23:02.743 sys 0m4.957s 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.743 ************************************ 00:23:02.743 END TEST nvmf_timeout 00:23:02.743 ************************************ 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@49 -- # [[ virt == phy ]] 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:02.743 00:23:02.743 real 5m41.039s 00:23:02.743 user 14m39.956s 00:23:02.743 sys 1m4.225s 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:02.743 09:16:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.743 ************************************ 00:23:02.743 END TEST nvmf_host 00:23:02.743 ************************************ 00:23:03.002 09:16:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 00:23:03.002 09:16:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ 0 -eq 0 ]] 00:23:03.002 09:16:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:03.002 09:16:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:03.002 09:16:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.002 09:16:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:03.002 ************************************ 00:23:03.002 START TEST nvmf_target_core_interrupt_mode 00:23:03.002 ************************************ 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:03.002 * Looking for test storage... 
00:23:03.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:03.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.002 --rc genhtml_branch_coverage=1 00:23:03.002 --rc 
genhtml_function_coverage=1 00:23:03.002 --rc genhtml_legend=1 00:23:03.002 --rc geninfo_all_blocks=1 00:23:03.002 --rc geninfo_unexecuted_blocks=1 00:23:03.002 00:23:03.002 ' 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:03.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.002 --rc genhtml_branch_coverage=1 00:23:03.002 --rc genhtml_function_coverage=1 00:23:03.002 --rc genhtml_legend=1 00:23:03.002 --rc geninfo_all_blocks=1 00:23:03.002 --rc geninfo_unexecuted_blocks=1 00:23:03.002 00:23:03.002 ' 00:23:03.002 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:03.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.002 --rc genhtml_branch_coverage=1 00:23:03.002 --rc genhtml_function_coverage=1 00:23:03.002 --rc genhtml_legend=1 00:23:03.003 --rc geninfo_all_blocks=1 00:23:03.003 --rc geninfo_unexecuted_blocks=1 00:23:03.003 00:23:03.003 ' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:03.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.003 --rc genhtml_branch_coverage=1 00:23:03.003 --rc genhtml_function_coverage=1 00:23:03.003 --rc genhtml_legend=1 00:23:03.003 --rc geninfo_all_blocks=1 00:23:03.003 --rc geninfo_unexecuted_blocks=1 00:23:03.003 00:23:03.003 ' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:03.003 09:16:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:03.003 ************************************ 00:23:03.003 START TEST nvmf_abort 00:23:03.003 ************************************ 00:23:03.003 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:03.262 * Looking for test storage... 
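The trace above shows nvmf/common.sh's `build_nvmf_app_args` appending flags to the `NVMF_APP` array, adding `--interrupt-mode` because the mode flag tested `1 -eq 1`. A minimal sketch of that pattern (not the actual nvmf/common.sh; the binary name and flag variable here are illustrative assumptions):

```shell
# Sketch of conditional app-arg accumulation as traced above.
# './nvmf_tgt' and 'interrupt_mode' are assumed names, not SPDK's exact ones.
NVMF_APP=(./nvmf_tgt)
NVMF_APP_SHM_ID=0
interrupt_mode=1

build_nvmf_app_args() {
  # always pass the shared-memory id and a trace mask
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  # conditionally enable interrupt mode, mirroring "'[' 1 -eq 1 ']'" in the log
  if [ "$interrupt_mode" -eq 1 ]; then
    NVMF_APP+=(--interrupt-mode)
  fi
}

build_nvmf_app_args
echo "${NVMF_APP[@]}"   # → ./nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode
```

Building the command line as an array (rather than a string) keeps each flag a separate word when the app is finally invoked as `"${NVMF_APP[@]}"`.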
00:23:03.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:03.262 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:03.262 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:23:03.262 09:16:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.262 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # 
case "$op" in 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:03.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.263 --rc genhtml_branch_coverage=1 00:23:03.263 --rc genhtml_function_coverage=1 00:23:03.263 --rc genhtml_legend=1 00:23:03.263 --rc geninfo_all_blocks=1 00:23:03.263 --rc geninfo_unexecuted_blocks=1 00:23:03.263 00:23:03.263 ' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:03.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.263 --rc genhtml_branch_coverage=1 00:23:03.263 --rc genhtml_function_coverage=1 00:23:03.263 --rc genhtml_legend=1 00:23:03.263 --rc geninfo_all_blocks=1 00:23:03.263 --rc geninfo_unexecuted_blocks=1 00:23:03.263 00:23:03.263 ' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:03.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.263 --rc genhtml_branch_coverage=1 00:23:03.263 --rc genhtml_function_coverage=1 00:23:03.263 --rc genhtml_legend=1 00:23:03.263 --rc geninfo_all_blocks=1 00:23:03.263 --rc geninfo_unexecuted_blocks=1 00:23:03.263 00:23:03.263 ' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:03.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.263 --rc genhtml_branch_coverage=1 00:23:03.263 --rc genhtml_function_coverage=1 00:23:03.263 --rc genhtml_legend=1 00:23:03.263 --rc geninfo_all_blocks=1 00:23:03.263 --rc geninfo_unexecuted_blocks=1 00:23:03.263 00:23:03.263 ' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:03.263 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:23:03.264 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:03.264 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@223 -- # create_target_ns 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # 
eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:03.264 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:03.264 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target0 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:03.265 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:03.525 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:03.525 10.0.0.1 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 
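The `set_ip` steps above derive dotted-quad addresses from a 32-bit pool value: `val_to_ip 167772161` (0x0a000001) yields the `10 0 0 1` fed to `printf '%u.%u.%u.%u\n'`. A sketch of how such a helper can be written, assuming the shift-and-mask approach (the exact body of SPDK's `val_to_ip` is not shown in the trace):

```shell
# Split a 32-bit integer into four octets with bit shifts and masks,
# reproducing the behavior seen in the trace (assumed implementation).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

This is why the setup loop can advance the pool with plain integer arithmetic (`ip_pool += 2` per interface pair) and still hand well-formed addresses to `ip addr add`.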
00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:03.525 10.0.0.2 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:03.525 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:03.525 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@151 -- # set_up target1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:03.526 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772163 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:03.526 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:03.526 10.0.0.3 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772164 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 
10.0.0.4/24 dev target1' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:03.526 10.0.0.4 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:03.526 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort 
-- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:03.526 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 
-- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:03.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:03.527 00:23:03.527 --- 10.0.0.1 ping statistics --- 00:23:03.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.527 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:03.527 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:03.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:03.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:23:03.527 00:23:03.527 --- 10.0.0.2 ping statistics --- 00:23:03.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.527 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:03.527 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 
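`set_ip` writes each assigned address into the device's `ifalias` attribute, and `get_ip_address` (seen reading `/sys/class/net/initiator1/ifalias` above) retrieves it later, which avoids parsing `ip addr` output. A minimal illustration of that round trip, using a temporary directory as a stand-in for `/sys/class/net`, since writing the real per-device attribute requires root:

```shell
# Stand-in for /sys/class/net; the real scripts write the kernel's
# per-device ifalias attribute directly (root required).
sysfs=$(mktemp -d)
mkdir -p "$sysfs/initiator1"

# set_ip side: record the assigned address in ifalias
echo 10.0.0.3 | tee "$sysfs/initiator1/ifalias"

# get_ip_address side: read it back instead of parsing `ip addr` output
ip=$(cat "$sysfs/initiator1/ifalias")
echo "$ip"   # 10.0.0.3
```

For devices inside the namespace the same read is simply prefixed with `ip netns exec nvmf_ns_spdk`, as the trace shows for target0 and target1.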
00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:03.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:03.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:23:03.787 00:23:03.787 --- 10.0.0.3 ping statistics --- 00:23:03.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.787 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:03.787 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:03.787 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:03.787 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:23:03.787 00:23:03.787 --- 10.0.0.4 ping statistics --- 00:23:03.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.787 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # return 0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 
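Stripped of the trace framing, the per-pair setup the log walks through (`setup_interface_pair` in nvmf/setup.sh) reduces to: one veth pair per endpoint, the target end moved into the nvmf_ns_spdk namespace, the `_br` peers attached to the nvmf_br bridge, an iptables ACCEPT for the NVMe/TCP port, and a ping in each direction. A hedged, condensed reconstruction for pair 0; it assumes the namespace and bridge already exist and needs root, so it is wrapped in a function rather than executed:

```shell
# Condensed reconstruction of one initiator/target pair from the trace.
# Assumes the nvmf_ns_spdk namespace and nvmf_br bridge already exist;
# requires root, so the function is defined but not invoked here.
setup_pair0() {
    local ns=nvmf_ns_spdk bridge=nvmf_br

    ip link add initiator0 type veth peer name initiator0_br
    ip link add target0    type veth peer name target0_br

    ip link set target0 netns "$ns"          # target side lives in the namespace

    ip addr add 10.0.0.1/24 dev initiator0
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev target0

    ip link set initiator0 up
    ip netns exec "$ns" ip link set target0 up

    # Bridge the _br peers so initiator and target can reach each other
    ip link set initiator0_br master "$bridge" && ip link set initiator0_br up
    ip link set target0_br    master "$bridge" && ip link set target0_br up

    # Open the NVMe/TCP port on the initiator side; the comment tags the
    # rule so cleanup can later delete only SPDK-added rules
    iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'

    # Connectivity checks, as the ping_ips loop in the trace does
    ip netns exec "$ns" ping -c 1 10.0.0.1
    ping -c 1 10.0.0.2
}
```

The log then repeats the same sequence for initiator1/target1 with 10.0.0.3 and 10.0.0.4, and finally `nvmf_legacy_env` exports the resulting device names and addresses (NVMF_TARGET_INTERFACE, NVMF_FIRST_INITIATOR_IP, and so on) for the test body.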
00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:03.787 
09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:03.787 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target0 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo target1 00:23:03.787 09:16:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=target1 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:03.787 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=98660 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 98660 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 98660 ']' 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.788 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:03.788 [2024-11-20 09:16:42.620146] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:03.788 [2024-11-20 09:16:42.621440] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:23:03.788 [2024-11-20 09:16:42.621531] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.046 [2024-11-20 09:16:42.769804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:04.046 [2024-11-20 09:16:42.826870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.046 [2024-11-20 09:16:42.826919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.046 [2024-11-20 09:16:42.826930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.046 [2024-11-20 09:16:42.826939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.046 [2024-11-20 09:16:42.826946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.046 [2024-11-20 09:16:42.828019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.046 [2024-11-20 09:16:42.828150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:04.046 [2024-11-20 09:16:42.828155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.046 [2024-11-20 09:16:42.922332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:04.046 [2024-11-20 09:16:42.922528] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:04.046 [2024-11-20 09:16:42.922744] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:23:04.046 [2024-11-20 09:16:42.923383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:04.046 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.046 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:23:04.046 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:04.046 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:04.046 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.304 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.304 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:04.304 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.304 09:16:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.304 [2024-11-20 09:16:43.005124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:23:04.304 Malloc0 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.304 Delay0 00:23:04.304 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.305 [2024-11-20 09:16:43.085183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.305 09:16:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:04.563 [2024-11-20 09:16:43.290875] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:06.465 Initializing NVMe Controllers 00:23:06.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:23:06.465 controller IO queue size 128 less than required 00:23:06.465 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:06.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:06.465 Initialization complete. Launching workers. 
00:23:06.465 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27224 00:23:06.465 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27281, failed to submit 66 00:23:06.465 success 27224, unsuccessful 57, failed 0 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:06.465 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:06.724 rmmod nvme_tcp 00:23:06.724 rmmod nvme_fabrics 00:23:06.724 rmmod nvme_keyring 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:06.724 09:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 98660 ']' 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 98660 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 98660 ']' 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 98660 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98660 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:06.724 killing process with pid 98660 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98660' 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 98660 00:23:06.724 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 98660 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@342 -- # nvmf_fini 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:06.984 09:16:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:23:06.984 
09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # continue 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:23:06.984 ************************************ 00:23:06.984 END TEST nvmf_abort 00:23:06.984 ************************************ 00:23:06.984 00:23:06.984 real 0m3.967s 00:23:06.984 user 0m8.994s 00:23:06.984 sys 0m1.573s 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.984 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:07.244 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:07.244 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:07.244 
09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.244 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:07.244 ************************************ 00:23:07.244 START TEST nvmf_ns_hotplug_stress 00:23:07.244 ************************************ 00:23:07.244 09:16:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:07.244 * Looking for test storage... 00:23:07.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:07.244 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:07.244 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:23:07.244 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:07.244 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:07.244 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.245 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.245 --rc genhtml_branch_coverage=1 00:23:07.245 --rc genhtml_function_coverage=1 00:23:07.245 --rc genhtml_legend=1 00:23:07.245 --rc geninfo_all_blocks=1 00:23:07.245 --rc geninfo_unexecuted_blocks=1 00:23:07.245 00:23:07.245 ' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.245 --rc genhtml_branch_coverage=1 00:23:07.245 --rc genhtml_function_coverage=1 00:23:07.245 --rc genhtml_legend=1 00:23:07.245 --rc geninfo_all_blocks=1 00:23:07.245 --rc geninfo_unexecuted_blocks=1 00:23:07.245 00:23:07.245 ' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.245 --rc genhtml_branch_coverage=1 00:23:07.245 --rc genhtml_function_coverage=1 00:23:07.245 --rc genhtml_legend=1 00:23:07.245 --rc geninfo_all_blocks=1 00:23:07.245 --rc geninfo_unexecuted_blocks=1 00:23:07.245 00:23:07.245 ' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:07.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.245 --rc genhtml_branch_coverage=1 00:23:07.245 --rc genhtml_function_coverage=1 00:23:07.245 --rc genhtml_legend=1 00:23:07.245 --rc geninfo_all_blocks=1 00:23:07.245 --rc geninfo_unexecuted_blocks=1 00:23:07.245 00:23:07.245 ' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # 
uname -s 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.245 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.246 
09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:07.246 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@223 -- # create_target_ns 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:07.246 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:07.246 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:07.246 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:07.246 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:07.506 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target0 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:07.506 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:07.506 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:07.507 10.0.0.1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:07.507 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:07.507 10.0.0.2 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:07.507 
09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:07.507 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:07.507 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:07.507 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@151 -- # set_up target1 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.507 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip 
initiator1 167772163 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772163 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:07.508 10.0.0.3 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772164 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:07.508 10.0.0.4 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:07.508 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up 
initiator1_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:07.508 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:07.508 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.770 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:07.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:07.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:23:07.770 00:23:07.770 --- 10.0.0.1 ping statistics --- 00:23:07.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.770 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:07.770 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:07.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:07.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:23:07.770 00:23:07.770 --- 10.0.0.2 ping statistics --- 00:23:07.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.770 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:07.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:07.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:07.771 00:23:07.771 --- 10.0.0.3 ping statistics --- 00:23:07.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.771 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:07.771 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:07.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:07.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.083 ms 00:23:07.771 00:23:07.771 --- 10.0.0.4 ping statistics --- 00:23:07.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.771 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # return 0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:07.771 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' 
NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.771 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target0 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target0 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo target1 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=target1 00:23:07.772 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:07.772 09:16:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=98938 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 98938 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 98938 ']' 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.772 09:16:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:07.772 [2024-11-20 09:16:46.645833] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:07.772 [2024-11-20 09:16:46.647130] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:23:07.772 [2024-11-20 09:16:46.647222] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:08.032 [2024-11-20 09:16:46.806917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:08.032 [2024-11-20 09:16:46.868294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:08.032 [2024-11-20 09:16:46.868445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:08.032 [2024-11-20 09:16:46.868465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:08.032 [2024-11-20 09:16:46.868476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:08.032 [2024-11-20 09:16:46.868486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:08.032 [2024-11-20 09:16:46.869683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.032 [2024-11-20 09:16:46.869803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:08.032 [2024-11-20 09:16:46.869805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.318 [2024-11-20 09:16:46.969531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:08.318 [2024-11-20 09:16:46.970149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:08.318 [2024-11-20 09:16:46.970279] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:08.318 [2024-11-20 09:16:46.970500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:23:08.318 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:08.576 [2024-11-20 09:16:47.347067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:08.576 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:08.834 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:09.093 [2024-11-20 09:16:47.915557] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.093 09:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:09.351 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:09.610 Malloc0 00:23:09.610 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:10.177 Delay0 00:23:10.177 09:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:10.434 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:10.434 NULL1 00:23:10.692 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:10.692 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=99062 00:23:10.692 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:10.692 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:10.692 09:16:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:12.067 Read completed with error (sct=0, sc=11) 00:23:12.067 09:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:12.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:12.326 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:23:12.326 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:12.326 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:12.584 true 00:23:12.584 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:12.584 09:16:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:13.521 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:13.521 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:13.521 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:13.779 true 00:23:13.779 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:13.779 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:14.037 09:16:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:14.602 09:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:23:14.602 09:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:14.602 true 00:23:14.602 09:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:14.602 09:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:15.168 09:16:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:15.168 09:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:15.168 09:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:15.426 true 00:23:15.684 09:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:15.684 09:16:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:16.248 09:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:16.815 09:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:16.815 09:16:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:16.815 true 00:23:17.074 09:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:17.074 09:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.346 09:16:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:17.644 09:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:17.644 09:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:17.644 true 00:23:17.902 09:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:17.902 09:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:18.161 09:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:18.420 09:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:18.420 09:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:18.678 true 00:23:18.678 09:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:18.678 09:16:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.615 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:19.615 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:19.615 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:23:19.874 true 00:23:19.874 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:19.874 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:20.133 09:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:20.391 09:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:23:20.391 09:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:23:20.649 true 00:23:20.649 09:16:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:20.649 09:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:20.907 09:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:21.166 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:23:21.166 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:23:21.424 true 00:23:21.424 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:21.424 09:17:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:22.355 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:22.614 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:23:22.614 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:23:22.872 true 00:23:22.872 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
99062 00:23:22.872 09:17:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:23.130 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:23.389 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:23:23.389 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:23:23.647 true 00:23:23.647 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:23.647 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:23.906 09:17:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:24.165 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:23:24.165 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:23:24.424 true 00:23:24.682 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:24.682 09:17:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:25.615 09:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:25.615 09:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:23:25.615 09:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:23:25.926 true 00:23:25.926 09:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:25.926 09:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:26.185 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:26.443 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:23:26.443 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:23:26.701 true 00:23:26.701 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:26.701 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:23:26.958 09:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:27.216 09:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:23:27.216 09:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:23:27.475 true 00:23:27.475 09:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:27.475 09:17:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:28.410 09:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:28.668 09:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:23:28.668 09:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:23:28.927 true 00:23:28.927 09:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:28.927 09:17:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:29.185 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:29.443 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:23:29.443 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:23:30.009 true 00:23:30.009 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:30.009 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:30.009 09:17:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:30.266 09:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:23:30.266 09:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:23:30.523 true 00:23:30.523 09:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:30.523 09:17:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:31.453 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:31.453 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:31.711 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:23:31.711 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:23:31.969 true 00:23:31.969 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:31.969 09:17:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:32.227 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:32.485 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:23:32.485 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:23:32.744 true 00:23:32.744 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:32.744 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:33.002 09:17:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:23:33.261 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:23:33.261 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:23:33.521 true 00:23:33.521 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:33.521 09:17:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:34.459 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:34.718 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:23:34.718 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:23:34.976 true 00:23:34.976 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:34.976 09:17:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.235 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:35.493 09:17:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:23:35.493 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:23:35.751 true 00:23:35.751 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:35.751 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:36.318 09:17:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:36.318 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:23:36.318 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:23:36.576 true 00:23:36.576 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:36.576 09:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:37.511 09:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:37.770 09:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1026 00:23:37.770 09:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:23:38.028 true 00:23:38.028 09:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:38.028 09:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:38.595 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:38.853 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:23:38.853 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:23:39.112 true 00:23:39.112 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:39.112 09:17:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:39.371 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:39.629 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:23:39.629 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:23:39.888 true 00:23:39.888 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:39.888 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:40.146 09:17:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:40.404 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:23:40.404 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:23:40.663 true 00:23:40.663 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:40.663 09:17:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:41.630 Initializing NVMe Controllers 00:23:41.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:41.630 Controller IO queue size 128, less than required. 00:23:41.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:41.630 Controller IO queue size 128, less than required. 00:23:41.630 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:41.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:41.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:41.630 Initialization complete. Launching workers. 00:23:41.630 ======================================================== 00:23:41.630 Latency(us) 00:23:41.630 Device Information : IOPS MiB/s Average min max 00:23:41.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 326.00 0.16 156449.17 3372.62 1018249.25 00:23:41.630 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7961.87 3.89 16076.17 3614.03 589693.27 00:23:41.630 ======================================================== 00:23:41.630 Total : 8287.87 4.05 21597.69 3372.62 1018249.25 00:23:41.630 00:23:41.630 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:41.889 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:23:41.889 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:23:42.147 true 00:23:42.147 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 99062 00:23:42.147 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (99062) - No such process 00:23:42.147 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 99062 00:23:42.147 09:17:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:23:42.406 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:42.664 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:23:42.664 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:23:42.664 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:23:42.664 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.664 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:23:42.922 null0 00:23:42.922 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:42.922 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.922 09:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:23:43.180 null1 00:23:43.180 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.180 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.180 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 
4096 00:23:43.466 null2 00:23:43.466 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.466 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.466 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:23:43.724 null3 00:23:43.724 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.724 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.724 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:23:43.982 null4 00:23:43.982 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.982 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.982 09:17:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:23:44.240 null5 00:23:44.498 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:44.498 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:44.498 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 
4096 00:23:44.498 null6 00:23:44.755 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:44.755 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:44.755 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:23:45.014 null7 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.014 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # 
pids+=($!) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:45.015 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 100102 100104 100105 100107 100109 100110 100113 100116 00:23:45.274 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:45.274 09:17:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:45.274 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.532 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:45.533 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.533 09:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.533 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:45.791 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:45.791 09:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.050 09:17:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.308 
09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:46.308 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.566 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:46.824 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:47.083 09:17:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.083 09:17:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:47.341 09:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.341 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.342 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:47.600 
09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:47.600 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.859 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:47.860 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.860 09:17:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.860 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:48.118 09:17:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:48.118 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:48.379 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.379 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.637 
09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:48.637 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:48.896 09:17:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.896 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.154 09:17:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:49.154 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.154 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.154 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.413 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:49.413 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:49.672 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.672 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.930 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.930 09:17:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.930 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.188 09:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.188 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.455 09:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:50.455 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.715 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.973 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.973 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:50.974 09:17:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.974 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.232 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.232 09:17:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 
00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:51.232 rmmod nvme_tcp 00:23:51.232 rmmod nvme_fabrics 00:23:51.232 rmmod nvme_keyring 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 98938 ']' 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 98938 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 98938 ']' 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 98938 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@959 -- # uname 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:51.232 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98938 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:51.490 killing process with pid 98938 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98938' 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 98938 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 98938 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:51.490 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:23:51.490 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- 
# eval ' ip link delete initiator0' 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:23:51.749 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # continue 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:23:51.749 ************************************ 00:23:51.749 END TEST nvmf_ns_hotplug_stress 00:23:51.749 ************************************ 00:23:51.749 00:23:51.749 real 0m44.611s 00:23:51.749 user 3m20.680s 00:23:51.749 sys 0m19.215s 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.749 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:51.749 ************************************ 00:23:51.749 START TEST nvmf_delete_subsystem 00:23:51.749 ************************************ 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:23:51.749 * Looking for test storage... 00:23:51.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:23:51.749 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.010 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.011 --rc genhtml_branch_coverage=1 00:23:52.011 --rc genhtml_function_coverage=1 00:23:52.011 --rc genhtml_legend=1 00:23:52.011 --rc geninfo_all_blocks=1 00:23:52.011 --rc geninfo_unexecuted_blocks=1 00:23:52.011 00:23:52.011 ' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.011 --rc genhtml_branch_coverage=1 00:23:52.011 --rc genhtml_function_coverage=1 00:23:52.011 --rc genhtml_legend=1 00:23:52.011 --rc geninfo_all_blocks=1 00:23:52.011 --rc geninfo_unexecuted_blocks=1 00:23:52.011 00:23:52.011 ' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.011 --rc genhtml_branch_coverage=1 00:23:52.011 --rc genhtml_function_coverage=1 00:23:52.011 --rc genhtml_legend=1 00:23:52.011 --rc geninfo_all_blocks=1 00:23:52.011 --rc geninfo_unexecuted_blocks=1 00:23:52.011 00:23:52.011 ' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.011 --rc genhtml_branch_coverage=1 00:23:52.011 --rc genhtml_function_coverage=1 00:23:52.011 --rc genhtml_legend=1 00:23:52.011 --rc geninfo_all_blocks=1 00:23:52.011 --rc geninfo_unexecuted_blocks=1 00:23:52.011 00:23:52.011 ' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname 
-s 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:52.011 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.012 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ virt 
!= virt ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@280 -- # nvmf_veth_init 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@223 -- # create_target_ns 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.012 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # create_main_bridge 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@105 -- # delete_main_bridge 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:52.012 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:52.012 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator0 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.012 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target0 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0 up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.013 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target0 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:23:52.013 10.0.0.1 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 
10.0.0.2/24 dev target0 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:23:52.013 10.0.0.2 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator0 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:23:52.013 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:23:52.013 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target0_br 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:23:52.013 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:23:52.014 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up initiator1 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.014 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:23:52.014 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@151 -- # set_up target1 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1 up 00:23:52.274 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # set_up target1_br 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns target1 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:23:52.275 09:17:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772163 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:23:52.275 10.0.0.3 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local 
val=167772164 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:23:52.275 10.0.0.4 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up initiator1 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:23:52.275 09:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:23:52.275 09:17:31 
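The trace above shows `val_to_ip` turning the integers 167772163 and 167772164 into 10.0.0.3 and 10.0.0.4 via `printf '%u.%u.%u.%u\n'`. A minimal standalone sketch of that conversion (the octet-extraction arithmetic is our reconstruction; only the `printf` format string appears in the trace):

```shell
# Sketch of an integer-to-dotted-quad conversion consistent with the trace.
# The shift/mask arithmetic is assumed; the trace only shows the printf call.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772163   # -> 10.0.0.3
val_to_ip 167772164   # -> 10.0.0.4
```

167772160 is 0x0A000000, i.e. 10.0.0.0, so the per-pair IP pool in the log is simply this base value incremented per device.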
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 
00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@129 -- # set_up target1_br 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:23:52.275 09:17:31 
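Condensed from the xtrace above, the per-pair setup for id=1 amounts to roughly the following sequence (an approximate reconstruction, not the script itself; requires root, and device/namespace names are as they appear in the log):

```shell
# Approximate per-pair network provisioning reconstructed from the trace.
ip link add initiator1 type veth peer name initiator1_br   # initiator-side veth pair
ip link set initiator1 up
ip link set initiator1_br up
ip link add target1 type veth peer name target1_br         # target-side veth pair
ip link set target1 up
ip link set target1_br up
ip link set target1 netns nvmf_ns_spdk                     # move target end into the SPDK netns
ip addr add 10.0.0.3/24 dev initiator1
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
ip link set initiator1_br master nvmf_br                   # bridge the *_br peers together
ip link set target1_br master nvmf_br
iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on 4420
```

The bridge `nvmf_br` joins the `*_br` peer ends, so traffic between the host-side initiator interfaces and the namespaced target interfaces flows through it; the pings that follow in the log verify exactly that path.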
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 2 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:52.275 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:52.276 09:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:52.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:23:52.276 00:23:52.276 --- 10.0.0.1 ping statistics --- 00:23:52.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.276 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:52.276 09:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:52.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:23:52.276 00:23:52.276 --- 10.0.0.2 ping statistics --- 00:23:52.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.276 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:23:52.276 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:52.276 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:52.276 00:23:52.276 --- 10.0.0.3 ping statistics --- 00:23:52.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.276 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.276 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:52.277 09:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:23:52.277 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:52.277 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.092 ms 00:23:52.277 00:23:52.277 --- 10.0.0.4 ping statistics --- 00:23:52.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.277 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # return 0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # 
get_net_dev initiator0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:52.277 09:17:31 
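The repeated `cat /sys/class/net/<dev>/ifalias` calls show how the helpers recover each device's IP: the address written via `tee` during `set_ip` is read back from the interface's ifalias attribute. A sketch of that lookup (the second parameter, an overridable sysfs root, is our own addition so the function can be exercised without a real device):

```shell
# Sketch of the ifalias-based IP lookup the trace suggests.
# $2 (sysfs root) is hypothetical, added only to make the function testable.
get_ip_address() {
  local dev=$1 sysfs=${2:-/sys/class/net}
  cat "$sysfs/$dev/ifalias"
}
```

Storing the address in ifalias sidesteps parsing `ip addr` output and works identically inside a namespace by prefixing the `cat` with `ip netns exec`, as the trace does for the `target*` devices.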
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=initiator1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' 
NVMF_TARGET_NS_CMD 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target0 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:52.277 09:17:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:52.277 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo target1 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=target1 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:52.278 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=101497 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 101497 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 101497 ']' 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:23:52.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.536 09:17:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.536 [2024-11-20 09:17:31.258051] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
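The get_ip_address/get_net_dev trace above resolves a logical device name (target0) to its address by reading /sys/class/net/&lt;dev&gt;/ifalias, with the netns command prefix passed by *name* and dereferenced via a bash nameref (`local -n ns=NVMF_TARGET_NS_CMD`). A minimal sketch of that nameref pattern, using a temporary stand-in directory instead of real sysfs so it runs anywhere (the helper body is illustrative, not the exact setup.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the nameref-based lookup used by nvmf/setup.sh's get_ip_address.
# SYSFS_ROOT is a stand-in for /sys/class/net so the sketch needs no real NICs.
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/target0"
echo "10.0.0.2" > "$SYSFS_ROOT/target0/ifalias"   # ifalias carries the address

# Command prefix used to enter the target netns ("ip netns exec nvmf_ns_spdk"
# in the CI run); empty here so the sketch runs without a namespace.
NVMF_TARGET_NS_CMD=()

get_ip_address() {
    local dev=$1 in_ns=$2 ip
    local -n ns=$in_ns               # nameref: dereference the prefix by name
    ip=$("${ns[@]}" cat "$SYSFS_ROOT/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

NVMF_FIRST_TARGET_IP=$(get_ip_address target0 NVMF_TARGET_NS_CMD)
echo "$NVMF_FIRST_TARGET_IP"
```

In the real harness the same lookup runs once per device, which is why the trace repeats for target1 and assigns NVMF_SECOND_TARGET_IP.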
00:23:52.536 [2024-11-20 09:17:31.259087] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:23:52.536 [2024-11-20 09:17:31.259150] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.536 [2024-11-20 09:17:31.407890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:52.795 [2024-11-20 09:17:31.475325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.795 [2024-11-20 09:17:31.475422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.795 [2024-11-20 09:17:31.475437] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.795 [2024-11-20 09:17:31.475449] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.795 [2024-11-20 09:17:31.475458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.795 [2024-11-20 09:17:31.476739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.795 [2024-11-20 09:17:31.476770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.795 [2024-11-20 09:17:31.584327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:52.795 [2024-11-20 09:17:31.584993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:52.795 [2024-11-20 09:17:31.584995] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
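The waitforlisten step above blocks until the freshly launched nvmf_tgt (pid 101497) is up and serving its RPC socket at /var/tmp/spdk.sock, bailing out if the process dies or max_retries is exceeded. A self-contained sketch of that polling pattern; the "app" is a background sleep and the socket is stood in by a regular file so the block runs without SPDK:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll for the app's RPC endpoint,
# giving up after max_retries or if the app process exits early.
rpc_addr=$(mktemp -u)   # stand-in for /var/tmp/spdk.sock; does not exist yet
max_retries=100

( sleep 0.2; : > "$rpc_addr" ) &   # fake app: "listens" after a short startup
app_pid=$!

i=0
while [ ! -e "$rpc_addr" ]; do
    (( ++i > max_retries )) && { echo "timeout waiting for $rpc_addr" >&2; exit 1; }
    kill -0 "$app_pid" 2>/dev/null || { echo "app died before listening" >&2; exit 1; }
    sleep 0.1
done
echo "listening: $rpc_addr"
wait "$app_pid"
```

The real helper additionally checks for a socket (not a plain file) and honors the -i 0 instance suffix, which this sketch omits.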
00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 [2024-11-20 09:17:32.338034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 [2024-11-20 09:17:32.366222] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 NULL1 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:23:53.729 Delay0 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=101548 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:53.729 09:17:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:23:53.729 [2024-11-20 09:17:32.574623] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
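The delete_subsystem.sh steps traced above provision the target over RPC: create the TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev (size in MB, 512-byte blocks) wrapped in a delay bdev, and attach the delay bdev as a namespace. A sketch of that sequence with rpc_cmd stubbed to record calls, so the ordering can be exercised without a running nvmf_tgt (the real harness routes rpc_cmd to scripts/rpc.py over /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# Stub rpc_cmd so the provisioning order can be checked without SPDK.
calls=()
rpc_cmd() { calls+=("$1"); echo "rpc: $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512    # 1000 MB backing bdev, 512 B blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

echo "${#calls[@]} RPCs issued"
```

The 1,000,000 us delay-bdev latencies are what keep I/O in flight long enough for the later nvmf_delete_subsystem call to abort it, producing the "Read/Write completed with error" storm below.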
00:23:55.653 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.653 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.653 09:17:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 
00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.942 Write completed with error (sct=0, sc=8) 00:23:55.942 starting I/O failed: -6 00:23:55.942 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 [2024-11-20 09:17:34.612144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227ac30 is same with the state(6) to be set 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, 
sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read 
completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 
Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 
00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Read completed with error (sct=0, sc=8) 00:23:55.943 Write completed with error 
(sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 Write completed with error (sct=0, sc=8) 00:23:55.943 starting I/O failed: -6 00:23:55.943 starting I/O failed: -6 00:23:55.943 starting I/O failed: -6 00:23:55.943 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:55.944 starting I/O failed: -6 00:23:56.878 [2024-11-20 09:17:35.590432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2276ee0 is same with the state(6) to be set 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 
00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 [2024-11-20 09:17:35.610250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618400d020 is same with the state(6) to be set 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 
Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 [2024-11-20 09:17:35.611261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f618400d680 is same with the state(6) to be set 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed 
with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Read completed with error (sct=0, sc=8) 00:23:56.878 Write completed with error (sct=0, sc=8) 00:23:56.879 [2024-11-20 09:17:35.614340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227dea0 is same with the state(6) to be set 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, 
sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 Read completed with error (sct=0, sc=8) 00:23:56.879 Write completed with error (sct=0, sc=8) 00:23:56.879 [2024-11-20 09:17:35.614586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x227aa50 is same with the state(6) to be set 00:23:56.879 Initializing NVMe Controllers 00:23:56.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.879 Controller IO queue size 128, less than required. 00:23:56.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:23:56.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:23:56.879 Initialization complete. Launching workers. 
00:23:56.879 ======================================================== 00:23:56.879 Latency(us) 00:23:56.879 Device Information : IOPS MiB/s Average min max 00:23:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.06 0.08 886629.36 393.45 1013889.93 00:23:56.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.59 0.08 959588.13 896.61 1016876.39 00:23:56.879 ======================================================== 00:23:56.879 Total : 344.65 0.17 922741.33 393.45 1016876.39 00:23:56.879 00:23:56.879 [2024-11-20 09:17:35.615536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2276ee0 (9): Bad file descriptor 00:23:56.879 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:56.879 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.879 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:23:56.879 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101548 00:23:56.879 09:17:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 101548 00:23:57.445 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (101548) - No such process 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 101548 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- 
# local es=0 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 101548 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.445 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 101548 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:57.446 [2024-11-20 09:17:36.142321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=101588 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:23:57.446 09:17:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:57.446 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:57.446 [2024-11-20 09:17:36.321870] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.011 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:58.011 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:58.011 09:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:58.269 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:58.269 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:58.269 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:58.834 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:58.834 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:58.834 09:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:59.400 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:59.400 09:17:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:59.400 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:59.964 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:59.964 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:23:59.964 09:17:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:00.531 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:00.531 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:24:00.531 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:24:00.531 Initializing NVMe Controllers 00:24:00.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.531 Controller IO queue size 128, less than required. 00:24:00.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:24:00.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:24:00.531 Initialization complete. Launching workers. 
00:24:00.531 ======================================================== 00:24:00.531 Latency(us) 00:24:00.531 Device Information : IOPS MiB/s Average min max 00:24:00.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003945.99 1000161.29 1012875.66 00:24:00.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005972.85 1000336.65 1015438.58 00:24:00.531 ======================================================== 00:24:00.531 Total : 256.00 0.12 1004959.42 1000161.29 1015438.58 00:24:00.531 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 101588 00:24:00.789 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (101588) - No such process 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 101588 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:00.789 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i 
in {1..20} 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:01.048 rmmod nvme_tcp 00:24:01.048 rmmod nvme_fabrics 00:24:01.048 rmmod nvme_keyring 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 101497 ']' 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 101497 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 101497 ']' 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 101497 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101497 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.048 killing process with pid 101497 00:24:01.048 09:17:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101497' 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 101497 00:24:01.048 09:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 101497 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:01.306 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:01.307 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # continue 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:24:01.307 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:01.307 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:24:01.566 00:24:01.566 real 0m9.655s 00:24:01.566 user 0m24.390s 00:24:01.566 sys 0m2.474s 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:01.566 ************************************ 00:24:01.566 END TEST nvmf_delete_subsystem 00:24:01.566 ************************************ 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:01.566 ************************************ 00:24:01.566 START TEST nvmf_host_management 00:24:01.566 ************************************ 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:01.566 * Looking for test storage... 
00:24:01.566 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.566 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.566 --rc genhtml_branch_coverage=1 00:24:01.566 --rc genhtml_function_coverage=1 00:24:01.566 --rc genhtml_legend=1 00:24:01.566 --rc geninfo_all_blocks=1 00:24:01.566 --rc geninfo_unexecuted_blocks=1 00:24:01.566 00:24:01.566 ' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.566 --rc genhtml_branch_coverage=1 00:24:01.566 --rc genhtml_function_coverage=1 00:24:01.566 --rc genhtml_legend=1 00:24:01.566 --rc geninfo_all_blocks=1 00:24:01.566 --rc geninfo_unexecuted_blocks=1 00:24:01.566 00:24:01.566 ' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.566 --rc genhtml_branch_coverage=1 00:24:01.566 --rc genhtml_function_coverage=1 00:24:01.566 --rc genhtml_legend=1 00:24:01.566 --rc geninfo_all_blocks=1 00:24:01.566 --rc geninfo_unexecuted_blocks=1 00:24:01.566 00:24:01.566 ' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.566 
--rc genhtml_branch_coverage=1 00:24:01.566 --rc genhtml_function_coverage=1 00:24:01.566 --rc genhtml_legend=1 00:24:01.566 --rc geninfo_all_blocks=1 00:24:01.566 --rc geninfo_unexecuted_blocks=1 00:24:01.566 00:24:01.566 ' 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:01.566 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:24:01.826 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.826 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@223 -- # create_target_ns 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:01.826 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:24:01.826 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:01.827 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local 
dev=target0_br in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:01.827 10.0.0.1 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:01.827 
09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:01.827 10.0.0.2 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
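The `val_to_ip` calls traced above expand an integer from the `ip_pool` counter (starting at `0x0a000001` = 167772161) into a dotted-quad IPv4 address via `printf '%u.%u.%u.%u'`. As a reference, the helper can be sketched roughly as it behaves in `nvmf/setup.sh` (this is a reconstruction from the trace, not the harness source):

```shell
# Sketch of the val_to_ip helper seen in the trace: split a 32-bit
# integer into its four octets and print dotted-quad notation.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff))  $((val & 0xff))
}

# Each initiator/target pair consumes two consecutive addresses:
val_to_ip 167772161   # 10.0.0.1 -> initiator0
val_to_ip 167772162   # 10.0.0.2 -> target0
val_to_ip 167772163   # 10.0.0.3 -> initiator1
val_to_ip 167772164   # 10.0.0.4 -> target1
```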
-- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:01.827 
09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:01.827 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@151 -- # set_up target1 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:01.828 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:01.828 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772163 
00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:02.086 10.0.0.3 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:02.086 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772164 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:02.087 10.0.0.4 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:02.087 
09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:02.087 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # 
dev=initiator0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:02.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:02.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:02.087 00:24:02.087 --- 10.0.0.1 ping statistics --- 00:24:02.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.087 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:02.087 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:02.088 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:02.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:24:02.088 00:24:02.088 --- 10.0.0.2 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 
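The `set_ip` calls in the log carry each address as a 32-bit integer (e.g. `167772164`) and render it with `printf '%u.%u.%u.%u'`. A minimal reconstruction of that conversion, assuming the usual shift-and-mask octet split (the helper name matches the `val_to_ip` function the log references; its body here is our sketch):

```shell
# Sketch of the integer-to-dotted-quad step behind "val_to_ip 167772164".
# Only the name and the printf format come from the log; the
# shift-and-mask body is an assumption.
val_to_ip() {
  local val=$(( $1 ))
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772164   # 10.0.0.4
```

The loop's `ip_pool += 2` step then explains the even/odd pairing seen above: each pair N gets base+2N for the initiator and base+2N+1 for its target (10.0.0.1/10.0.0.2, then 10.0.0.3/10.0.0.4).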
00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:02.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:02.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:24:02.088 00:24:02.088 --- 10.0.0.3 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:02.088 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:02.088 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:02.088 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:24:02.088 00:24:02.088 --- 10.0.0.4 ping statistics --- 00:24:02.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.088 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # return 0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator0 00:24:02.088 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target0 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target0 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:02.089 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo target1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=target1 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # 
eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:24:02.089 09:17:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=101878 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 101878 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101878 ']' 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
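`waitforlisten 101878` above blocks until the target's RPC socket at /var/tmp/spdk.sock is ready. A hedged sketch of that kind of poll loop (the real helper lives in SPDK's test common scripts and may differ; the function name and retry budget here are ours):

```shell
# Poll until a UNIX-domain socket appears, up to a retry budget.
# Illustrative stand-in only, not SPDK's actual waitforlisten.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}
```

The log's helper also takes the PID (101878), presumably so it can bail out early if the process dies before the socket ever appears; that check is omitted from this sketch.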
00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.089 09:17:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.348 [2024-11-20 09:17:41.049487] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:02.348 [2024-11-20 09:17:41.050628] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:02.348 [2024-11-20 09:17:41.050719] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:02.348 [2024-11-20 09:17:41.197670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:02.348 [2024-11-20 09:17:41.255453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:02.348 [2024-11-20 09:17:41.255525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:02.348 [2024-11-20 09:17:41.255537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:02.348 [2024-11-20 09:17:41.255546] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:02.348 [2024-11-20 09:17:41.255553] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:02.348 [2024-11-20 09:17:41.256697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:02.348 [2024-11-20 09:17:41.256809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:02.348 [2024-11-20 09:17:41.256937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:02.348 [2024-11-20 09:17:41.256943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.607 [2024-11-20 09:17:41.351533] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:02.607 [2024-11-20 09:17:41.352047] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:02.607 [2024-11-20 09:17:41.352151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:02.607 [2024-11-20 09:17:41.352304] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:02.607 [2024-11-20 09:17:41.352904] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.607 [2024-11-20 09:17:41.434414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.607 09:17:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.607 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.607 Malloc0 00:24:02.607 [2024-11-20 09:17:41.522668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=101937 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 101937 /var/tmp/bdevperf.sock 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101937 ']' 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
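At this point the test assembles `rpcs.txt` and replays it through `rpc_cmd`, which is why Malloc0 appears and the target starts listening on 10.0.0.2:4420. A hedged sketch of the kind of RPC batch involved — the transport flags (`-t tcp -o -u 8192`) and the listener address are from the log; the bdev size, serial number, and exact command ordering are illustrative assumptions, not the verbatim contents of host_management.sh:

```shell
# Replayed line by line via scripts/rpc.py against /var/tmp/spdk.sock
nvmf_create_transport -t tcp -o -u 8192        # in-capsule data size 8192
bdev_malloc_create 64 512 -b Malloc0           # ramdisk bdev (size assumed)
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
```

Note the subsystem is created without `-a` (allow-any-host), since the test later exercises `nvmf_subsystem_remove_host` to yank access from the connected initiator.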
00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:02.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:02.866 { 00:24:02.866 "params": { 00:24:02.866 "name": "Nvme$subsystem", 00:24:02.866 "trtype": "$TEST_TRANSPORT", 00:24:02.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.866 "adrfam": "ipv4", 00:24:02.866 "trsvcid": "$NVMF_PORT", 00:24:02.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.866 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:02.866 "hdgst": ${hdgst:-false}, 00:24:02.866 "ddgst": ${ddgst:-false} 00:24:02.866 }, 00:24:02.866 "method": "bdev_nvme_attach_controller" 00:24:02.866 } 00:24:02.866 EOF 00:24:02.866 )") 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:24:02.866 09:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:02.866 "params": { 00:24:02.866 "name": "Nvme0", 00:24:02.866 "trtype": "tcp", 00:24:02.866 "traddr": "10.0.0.2", 00:24:02.866 "adrfam": "ipv4", 00:24:02.866 "trsvcid": "4420", 00:24:02.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:02.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:02.866 "hdgst": false, 00:24:02.866 "ddgst": false 00:24:02.866 }, 00:24:02.866 "method": "bdev_nvme_attach_controller" 00:24:02.866 }' 00:24:02.866 [2024-11-20 09:17:41.640241] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:02.866 [2024-11-20 09:17:41.640359] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101937 ] 00:24:03.125 [2024-11-20 09:17:41.793167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.125 [2024-11-20 09:17:41.861624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.383 Running I/O for 10 seconds... 
00:24:03.383 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.383 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:03.383 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:03.383 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:24:03.384 09:17:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:24:03.384 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
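The `read_io_count` checks above come from `waitforio`: it polls `bdev_get_iostat` up to 10 times, a quarter second apart, until Nvme0n1 has completed at least 100 reads — 67 on the first pass here, 515 on the second. A self-contained sketch of that retry loop with the RPC stubbed out (the stub values echo this run; the real helper pipes `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1` through `jq -r '.bdevs[0].num_read_ops'`):

```shell
#!/usr/bin/env bash
# Stub standing in for the bdev_get_iostat RPC + jq extraction.
# Returns the two samples observed in the log: 67, then 515.
samples=(67 515)
n=0
get_read_io_count() {
    read_io_count=${samples[n]}
    n=$((n + 1))
}

waitforio() {
    local ret=1 i
    # Up to 10 attempts, 0.25s apart, until >= 100 reads have completed.
    for ((i = 10; i != 0; i--)); do
        get_read_io_count
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio
io_ok=$?
echo "io_ok=$io_ok read_io_count=$read_io_count after $n samples"
```

Only once this loop succeeds does the test proceed to `nvmf_subsystem_remove_host`, guaranteeing I/O was actually in flight when host access is revoked.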
00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.646 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:03.646 [2024-11-20 09:17:42.504124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504739] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.504980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.504989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505243] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 
09:17:42.505676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505804] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.646 [2024-11-20 09:17:42.505868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.646 [2024-11-20 09:17:42.505877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.505899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.647 [2024-11-20 09:17:42.505908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.505920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:03.647 [2024-11-20 09:17:42.505929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.506158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.647 [2024-11-20 09:17:42.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.506190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.647 [2024-11-20 09:17:42.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.506211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.647 [2024-11-20 09:17:42.506220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.506230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:03.647 [2024-11-20 09:17:42.506239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:03.647 [2024-11-20 09:17:42.506249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d1660 is same with the state(6) to be set 00:24:03.647 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.647 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:03.647 09:17:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.647 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:03.647 [2024-11-20 09:17:42.507442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.647 task offset: 80128 on job bdev=Nvme0n1 fails 00:24:03.647 00:24:03.647 Latency(us) 00:24:03.647 [2024-11-20T09:17:42.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.647 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.647 Job: Nvme0n1 ended in about 0.45 seconds with error 00:24:03.647 Verification LBA range: start 0x0 length 0x400 00:24:03.647 Nvme0n1 : 0.45 1277.36 79.84 141.93 0.00 43194.88 4140.68 47185.92 00:24:03.647 [2024-11-20T09:17:42.566Z] =================================================================================================================== 00:24:03.647 [2024-11-20T09:17:42.566Z] Total : 1277.36 79.84 141.93 0.00 43194.88 4140.68 47185.92 00:24:03.647 [2024-11-20 09:17:42.509414] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:03.647 [2024-11-20 09:17:42.509446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d1660 (9): Bad file descriptor 00:24:03.647 [2024-11-20 09:17:42.512922] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:03.647 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.647 09:17:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 101937 00:24:05.023 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (101937) - No such process 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:05.023 { 00:24:05.023 "params": { 00:24:05.023 "name": "Nvme$subsystem", 00:24:05.023 "trtype": "$TEST_TRANSPORT", 00:24:05.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.023 "adrfam": "ipv4", 00:24:05.023 
"trsvcid": "$NVMF_PORT", 00:24:05.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.023 "hdgst": ${hdgst:-false}, 00:24:05.023 "ddgst": ${ddgst:-false} 00:24:05.023 }, 00:24:05.023 "method": "bdev_nvme_attach_controller" 00:24:05.023 } 00:24:05.023 EOF 00:24:05.023 )") 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:24:05.023 09:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:05.023 "params": { 00:24:05.023 "name": "Nvme0", 00:24:05.023 "trtype": "tcp", 00:24:05.023 "traddr": "10.0.0.2", 00:24:05.023 "adrfam": "ipv4", 00:24:05.023 "trsvcid": "4420", 00:24:05.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:05.023 "hdgst": false, 00:24:05.023 "ddgst": false 00:24:05.023 }, 00:24:05.023 "method": "bdev_nvme_attach_controller" 00:24:05.023 }' 00:24:05.023 [2024-11-20 09:17:43.582857] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:05.023 [2024-11-20 09:17:43.583453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101983 ] 00:24:05.023 [2024-11-20 09:17:43.731501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.023 [2024-11-20 09:17:43.795337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.282 Running I/O for 1 seconds... 
00:24:06.237 1380.00 IOPS, 86.25 MiB/s 00:24:06.237 Latency(us) 00:24:06.237 [2024-11-20T09:17:45.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.237 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.237 Verification LBA range: start 0x0 length 0x400 00:24:06.237 Nvme0n1 : 1.04 1413.80 88.36 0.00 0.00 44383.99 6672.76 41466.41 00:24:06.237 [2024-11-20T09:17:45.156Z] =================================================================================================================== 00:24:06.237 [2024-11-20T09:17:45.156Z] Total : 1413.80 88.36 0.00 0.00 44383.99 6672.76 41466.41 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:24:06.495 09:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:06.495 rmmod nvme_tcp 00:24:06.495 rmmod nvme_fabrics 00:24:06.495 rmmod nvme_keyring 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 101878 ']' 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 101878 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 101878 ']' 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 101878 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101878 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:06.495 killing process with 
pid 101878 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101878' 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 101878 00:24:06.495 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 101878 00:24:06.754 [2024-11-20 09:17:45.645639] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:07.084 09:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:07.084 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 3 
== 3 )) 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # continue 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:24:07.085 09:17:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:07.085 00:24:07.085 real 0m5.594s 00:24:07.085 user 0m17.884s 00:24:07.085 sys 0m2.384s 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:07.085 ************************************ 00:24:07.085 END TEST nvmf_host_management 00:24:07.085 ************************************ 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:07.085 ************************************ 00:24:07.085 START TEST nvmf_lvol 00:24:07.085 ************************************ 00:24:07.085 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:07.346 * Looking for test storage... 
00:24:07.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:07.346 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:07.346 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:07.346 09:17:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 
00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.346 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.347 --rc genhtml_branch_coverage=1 00:24:07.347 --rc genhtml_function_coverage=1 00:24:07.347 --rc genhtml_legend=1 00:24:07.347 --rc geninfo_all_blocks=1 00:24:07.347 --rc geninfo_unexecuted_blocks=1 00:24:07.347 00:24:07.347 ' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.347 --rc genhtml_branch_coverage=1 00:24:07.347 --rc genhtml_function_coverage=1 00:24:07.347 --rc genhtml_legend=1 00:24:07.347 --rc geninfo_all_blocks=1 00:24:07.347 --rc geninfo_unexecuted_blocks=1 00:24:07.347 00:24:07.347 ' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.347 --rc genhtml_branch_coverage=1 00:24:07.347 --rc genhtml_function_coverage=1 00:24:07.347 --rc genhtml_legend=1 00:24:07.347 --rc geninfo_all_blocks=1 00:24:07.347 --rc geninfo_unexecuted_blocks=1 00:24:07.347 00:24:07.347 ' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:07.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.347 --rc genhtml_branch_coverage=1 00:24:07.347 --rc genhtml_function_coverage=1 00:24:07.347 --rc genhtml_legend=1 00:24:07.347 --rc geninfo_all_blocks=1 00:24:07.347 --rc geninfo_unexecuted_blocks=1 00:24:07.347 00:24:07.347 ' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:07.347 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.347 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- 
# NVMF_TARGET_NS_CMD=() 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:07.347 
09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@223 -- # create_target_ns 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:24:07.347 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:07.347 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 
00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target0 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:07.348 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:07.348 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:07.348 10.0.0.1 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:07.348 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:07.608 10.0.0.2 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 
00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:07.608 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:07.608 
09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@151 -- # set_up target1 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:07.608 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772163 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:07.608 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:07.608 10.0.0.3 00:24:07.609 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772164 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:07.609 10.0.0.4 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:07.609 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:07.609 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 
1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.609 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:07.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:24:07.609 00:24:07.609 --- 10.0.0.1 ping statistics --- 00:24:07.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.609 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:07.609 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:07.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:24:07.609 00:24:07.609 --- 10.0.0.2 ping statistics --- 00:24:07.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.609 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:07.609 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.610 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:07.610 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 
00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:07.869 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:07.869 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:24:07.869 00:24:07.869 --- 10.0.0.3 ping statistics --- 00:24:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.869 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/target1/ifalias' 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:07.869 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:07.869 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.112 ms 00:24:07.869 00:24:07.869 --- 10.0.0.4 ping statistics --- 00:24:07.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.869 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # return 0 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:07.869 
09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator0 00:24:07.869 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:07.870 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo initiator1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target0 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target0 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo target1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=target1 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:07.870 
09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=102242 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 102242 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 102242 ']' 00:24:07.870 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.870 09:17:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.871 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.871 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.871 09:17:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:07.871 [2024-11-20 09:17:46.717643] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:07.871 [2024-11-20 09:17:46.719107] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:07.871 [2024-11-20 09:17:46.719209] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.129 [2024-11-20 09:17:46.873139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:08.129 [2024-11-20 09:17:46.940498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.129 [2024-11-20 09:17:46.941368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.129 [2024-11-20 09:17:46.941550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.129 [2024-11-20 09:17:46.941616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.129 [2024-11-20 09:17:46.941652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:08.129 [2024-11-20 09:17:46.943034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.129 [2024-11-20 09:17:46.943169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.129 [2024-11-20 09:17:46.943344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.129 [2024-11-20 09:17:47.041835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:08.129 [2024-11-20 09:17:47.041974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:08.129 [2024-11-20 09:17:47.042360] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:08.129 [2024-11-20 09:17:47.042510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.388 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:08.645 [2024-11-20 
09:17:47.416127] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.645 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:08.902 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:24:08.902 09:17:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:09.470 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:24:09.470 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:09.728 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:24:09.986 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d36b76fb-55ad-43b4-92f9-4e6fee53f2ba 00:24:09.986 09:17:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d36b76fb-55ad-43b4-92f9-4e6fee53f2ba lvol 20 00:24:10.245 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=95a88af7-6a30-433c-8294-4e10af42fbb3 00:24:10.246 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:10.812 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
95a88af7-6a30-433c-8294-4e10af42fbb3 00:24:10.812 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.071 [2024-11-20 09:17:49.964276] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.071 09:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:11.638 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=102376 00:24:11.638 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:24:11.638 09:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:24:12.572 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 95a88af7-6a30-433c-8294-4e10af42fbb3 MY_SNAPSHOT 00:24:12.831 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e0860533-384e-4aa6-87c2-b34a30400e02 00:24:12.831 09:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 95a88af7-6a30-433c-8294-4e10af42fbb3 30 00:24:13.405 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e0860533-384e-4aa6-87c2-b34a30400e02 MY_CLONE 00:24:13.663 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=10aae3bc-8cdf-45b6-92d4-b88c8063e1b6 00:24:13.663 09:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 10aae3bc-8cdf-45b6-92d4-b88c8063e1b6 00:24:14.227 09:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 102376 00:24:22.336 Initializing NVMe Controllers 00:24:22.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:24:22.336 Controller IO queue size 128, less than required. 00:24:22.336 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:22.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:24:22.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:24:22.336 Initialization complete. Launching workers. 00:24:22.336 ======================================================== 00:24:22.336 Latency(us) 00:24:22.336 Device Information : IOPS MiB/s Average min max 00:24:22.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10246.30 40.02 12500.92 2385.86 85537.54 00:24:22.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9945.50 38.85 12872.47 4149.90 65214.91 00:24:22.336 ======================================================== 00:24:22.336 Total : 20191.79 78.87 12683.93 2385.86 85537.54 00:24:22.336 00:24:22.336 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:22.337 09:18:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 95a88af7-6a30-433c-8294-4e10af42fbb3 00:24:22.594 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d36b76fb-55ad-43b4-92f9-4e6fee53f2ba 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:22.853 rmmod nvme_tcp 00:24:22.853 rmmod nvme_fabrics 00:24:22.853 rmmod nvme_keyring 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 102242 ']' 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 102242 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 102242 ']' 00:24:22.853 09:18:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 102242 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102242 00:24:22.853 killing process with pid 102242 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102242' 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 102242 00:24:22.853 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 102242 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:23.112 09:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:23.112 09:18:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:24:23.112 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:24:23.371 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # continue 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:24:23.371 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:24:23.371 ************************************ 00:24:23.371 END TEST nvmf_lvol 00:24:23.371 ************************************ 00:24:23.371 00:24:23.371 real 0m16.227s 00:24:23.371 user 0m56.931s 00:24:23.371 sys 0m6.171s 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:23.371 ************************************ 00:24:23.371 START TEST nvmf_lvs_grow 00:24:23.371 ************************************ 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:23.371 * Looking for test storage... 
00:24:23.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.371 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.632 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.632 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.632 --rc genhtml_branch_coverage=1 00:24:23.632 --rc genhtml_function_coverage=1 00:24:23.632 --rc genhtml_legend=1 00:24:23.632 --rc geninfo_all_blocks=1 00:24:23.632 --rc geninfo_unexecuted_blocks=1 00:24:23.632 00:24:23.632 ' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.632 --rc genhtml_branch_coverage=1 00:24:23.632 --rc genhtml_function_coverage=1 00:24:23.632 --rc genhtml_legend=1 00:24:23.632 --rc geninfo_all_blocks=1 00:24:23.632 --rc geninfo_unexecuted_blocks=1 00:24:23.632 00:24:23.632 ' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.632 --rc genhtml_branch_coverage=1 00:24:23.632 --rc genhtml_function_coverage=1 00:24:23.632 --rc genhtml_legend=1 00:24:23.632 --rc geninfo_all_blocks=1 00:24:23.632 --rc geninfo_unexecuted_blocks=1 00:24:23.632 00:24:23.632 ' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.632 --rc genhtml_branch_coverage=1 00:24:23.632 --rc genhtml_function_coverage=1 00:24:23.632 --rc genhtml_legend=1 00:24:23.632 --rc geninfo_all_blocks=1 00:24:23.632 --rc 
geninfo_unexecuted_blocks=1 00:24:23.632 00:24:23.632 ' 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:24:23.632 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:23.633 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' 
1 -eq 1 ']' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@280 -- # nvmf_veth_init 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@223 -- # create_target_ns 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.633 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # create_main_bridge 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@105 -- # delete_main_bridge 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:24:23.633 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 
'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator0 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth 
]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target0 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0 up 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target0_br 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target0 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:23.634 10.0.0.1 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:24:23.634 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.634 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:24:23.635 10.0.0.2 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator0 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:24:23.635 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local 
dev=initiator0_br in_ns= 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:24:23.635 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target0_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:24:23.895 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up initiator1 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:24:23.895 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@151 -- # set_up target1 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1 up 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # set_up target1_br 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns target1 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772163 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:24:23.895 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:24:23.895 10.0.0.3 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.896 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772164 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:24:23.896 10.0.0.4 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up initiator1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:24:23.896 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator1_br up' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@129 -- # set_up target1_br 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:24:23.896 
09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 2 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:24:23.896 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:23.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:23.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:24:23.896 00:24:23.896 --- 10.0.0.1 ping statistics --- 00:24:23.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.896 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:23.896 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/target0/ifalias 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:23.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:23.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:24:23.897 00:24:23.897 --- 10.0.0.2 ping statistics --- 00:24:23.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.897 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # 
cat /sys/class/net/initiator1/ifalias 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:24:23.897 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:23.897 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:24:23.897 00:24:23.897 --- 10.0.0.3 ping statistics --- 00:24:23.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.897 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/target1/ifalias 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:24:23.897 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:24:23.897 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:24:23.897 00:24:23.897 --- 10.0.0.4 ping statistics --- 00:24:23.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:23.897 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # return 0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:23.897 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator0 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator0 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:23.898 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo initiator1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=initiator1 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:24:23.898 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:24.156 09:18:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:24.156 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target0 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target0 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo target1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=target1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@166 -- # echo 10.0.0.4 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=102792 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # waitforlisten 102792 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 102792 ']' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.157 09:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:24.157 [2024-11-20 09:18:02.937453] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:24.157 [2024-11-20 09:18:02.938802] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:24.157 [2024-11-20 09:18:02.938882] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.415 [2024-11-20 09:18:03.094074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.415 [2024-11-20 09:18:03.163813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.415 [2024-11-20 09:18:03.163883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:24.415 [2024-11-20 09:18:03.163898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.415 [2024-11-20 09:18:03.163909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.415 [2024-11-20 09:18:03.163918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.415 [2024-11-20 09:18:03.164412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.415 [2024-11-20 09:18:03.266293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:24.415 [2024-11-20 09:18:03.266670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:24.415 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:24.415 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:24:24.415 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:24.415 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:24.415 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:24.673 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.673 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:24.931 [2024-11-20 09:18:03.641387] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.931 09:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:24.931 ************************************ 00:24:24.931 START TEST lvs_grow_clean 00:24:24.931 ************************************ 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:24.931 09:18:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:24.931 09:18:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:25.189 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:25.190 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:25.447 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:25.447 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:25.447 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:25.705 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:25.705 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:25.705 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d1d36b51-97e7-49d8-81bd-7998718c9004 lvol 150 00:24:25.964 09:18:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5d29036c-028d-4baa-bdf4-24290ddd8d1c 00:24:25.964 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:25.964 09:18:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:26.223 [2024-11-20 09:18:05.109093] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:26.223 [2024-11-20 09:18:05.109315] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:26.223 true 00:24:26.223 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:26.223 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:26.791 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:26.791 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:27.050 09:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5d29036c-028d-4baa-bdf4-24290ddd8d1c 00:24:27.309 09:18:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:27.568 [2024-11-20 09:18:06.229657] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.568 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102938 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102938 /var/tmp/bdevperf.sock 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 102938 ']' 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.828 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:27.828 [2024-11-20 09:18:06.576385] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:27.828 [2024-11-20 09:18:06.576519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102938 ] 00:24:27.828 [2024-11-20 09:18:06.727124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.088 [2024-11-20 09:18:06.800106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.088 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.088 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:24:28.088 09:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:28.363 Nvme0n1 00:24:28.363 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:28.933 [ 
00:24:28.933 { 00:24:28.933 "aliases": [ 00:24:28.933 "5d29036c-028d-4baa-bdf4-24290ddd8d1c" 00:24:28.933 ], 00:24:28.933 "assigned_rate_limits": { 00:24:28.933 "r_mbytes_per_sec": 0, 00:24:28.933 "rw_ios_per_sec": 0, 00:24:28.933 "rw_mbytes_per_sec": 0, 00:24:28.933 "w_mbytes_per_sec": 0 00:24:28.933 }, 00:24:28.933 "block_size": 4096, 00:24:28.933 "claimed": false, 00:24:28.933 "driver_specific": { 00:24:28.933 "mp_policy": "active_passive", 00:24:28.933 "nvme": [ 00:24:28.933 { 00:24:28.933 "ctrlr_data": { 00:24:28.933 "ana_reporting": false, 00:24:28.933 "cntlid": 1, 00:24:28.933 "firmware_revision": "25.01", 00:24:28.933 "model_number": "SPDK bdev Controller", 00:24:28.933 "multi_ctrlr": true, 00:24:28.933 "oacs": { 00:24:28.933 "firmware": 0, 00:24:28.933 "format": 0, 00:24:28.933 "ns_manage": 0, 00:24:28.933 "security": 0 00:24:28.933 }, 00:24:28.933 "serial_number": "SPDK0", 00:24:28.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.933 "vendor_id": "0x8086" 00:24:28.933 }, 00:24:28.933 "ns_data": { 00:24:28.933 "can_share": true, 00:24:28.933 "id": 1 00:24:28.933 }, 00:24:28.933 "trid": { 00:24:28.933 "adrfam": "IPv4", 00:24:28.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:28.933 "traddr": "10.0.0.2", 00:24:28.933 "trsvcid": "4420", 00:24:28.933 "trtype": "TCP" 00:24:28.933 }, 00:24:28.933 "vs": { 00:24:28.933 "nvme_version": "1.3" 00:24:28.933 } 00:24:28.933 } 00:24:28.933 ] 00:24:28.933 }, 00:24:28.933 "memory_domains": [ 00:24:28.933 { 00:24:28.933 "dma_device_id": "system", 00:24:28.933 "dma_device_type": 1 00:24:28.933 } 00:24:28.933 ], 00:24:28.933 "name": "Nvme0n1", 00:24:28.933 "num_blocks": 38912, 00:24:28.933 "numa_id": -1, 00:24:28.933 "product_name": "NVMe disk", 00:24:28.933 "supported_io_types": { 00:24:28.933 "abort": true, 00:24:28.933 "compare": true, 00:24:28.933 "compare_and_write": true, 00:24:28.933 "copy": true, 00:24:28.933 "flush": true, 00:24:28.933 "get_zone_info": false, 00:24:28.933 "nvme_admin": true, 00:24:28.933 
"nvme_io": true, 00:24:28.933 "nvme_io_md": false, 00:24:28.933 "nvme_iov_md": false, 00:24:28.933 "read": true, 00:24:28.933 "reset": true, 00:24:28.933 "seek_data": false, 00:24:28.933 "seek_hole": false, 00:24:28.933 "unmap": true, 00:24:28.933 "write": true, 00:24:28.933 "write_zeroes": true, 00:24:28.933 "zcopy": false, 00:24:28.933 "zone_append": false, 00:24:28.933 "zone_management": false 00:24:28.933 }, 00:24:28.933 "uuid": "5d29036c-028d-4baa-bdf4-24290ddd8d1c", 00:24:28.933 "zoned": false 00:24:28.933 } 00:24:28.933 ] 00:24:28.933 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102969 00:24:28.933 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:28.933 09:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:28.933 Running I/O for 10 seconds... 
00:24:29.870 Latency(us) 00:24:29.870 [2024-11-20T09:18:08.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:29.870 Nvme0n1 : 1.00 6728.00 26.28 0.00 0.00 0.00 0.00 0.00 00:24:29.870 [2024-11-20T09:18:08.789Z] =================================================================================================================== 00:24:29.870 [2024-11-20T09:18:08.789Z] Total : 6728.00 26.28 0.00 0.00 0.00 0.00 0.00 00:24:29.870 00:24:30.806 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:30.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:30.807 Nvme0n1 : 2.00 7005.00 27.36 0.00 0.00 0.00 0.00 0.00 00:24:30.807 [2024-11-20T09:18:09.726Z] =================================================================================================================== 00:24:30.807 [2024-11-20T09:18:09.726Z] Total : 7005.00 27.36 0.00 0.00 0.00 0.00 0.00 00:24:30.807 00:24:31.065 true 00:24:31.065 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:31.065 09:18:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:31.323 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:31.323 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:31.323 09:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102969 00:24:31.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:31.890 Nvme0n1 : 3.00 7211.33 28.17 0.00 0.00 0.00 0.00 0.00 00:24:31.890 [2024-11-20T09:18:10.809Z] =================================================================================================================== 00:24:31.890 [2024-11-20T09:18:10.809Z] Total : 7211.33 28.17 0.00 0.00 0.00 0.00 0.00 00:24:31.890 00:24:32.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:32.827 Nvme0n1 : 4.00 7221.75 28.21 0.00 0.00 0.00 0.00 0.00 00:24:32.827 [2024-11-20T09:18:11.746Z] =================================================================================================================== 00:24:32.827 [2024-11-20T09:18:11.746Z] Total : 7221.75 28.21 0.00 0.00 0.00 0.00 0.00 00:24:32.827 00:24:33.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.762 Nvme0n1 : 5.00 7233.80 28.26 0.00 0.00 0.00 0.00 0.00 00:24:33.762 [2024-11-20T09:18:12.681Z] =================================================================================================================== 00:24:33.762 [2024-11-20T09:18:12.681Z] Total : 7233.80 28.26 0.00 0.00 0.00 0.00 0.00 00:24:33.762 00:24:35.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:35.139 Nvme0n1 : 6.00 7248.67 28.32 0.00 0.00 0.00 0.00 0.00 00:24:35.139 [2024-11-20T09:18:14.059Z] =================================================================================================================== 00:24:35.140 [2024-11-20T09:18:14.059Z] Total : 7248.67 28.32 0.00 0.00 0.00 0.00 0.00 00:24:35.140 00:24:36.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:36.136 Nvme0n1 : 7.00 7265.43 28.38 0.00 0.00 0.00 0.00 0.00 00:24:36.136 [2024-11-20T09:18:15.055Z] 
=================================================================================================================== 00:24:36.136 [2024-11-20T09:18:15.055Z] Total : 7265.43 28.38 0.00 0.00 0.00 0.00 0.00 00:24:36.136 00:24:37.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:37.071 Nvme0n1 : 8.00 7277.62 28.43 0.00 0.00 0.00 0.00 0.00 00:24:37.071 [2024-11-20T09:18:15.990Z] =================================================================================================================== 00:24:37.071 [2024-11-20T09:18:15.990Z] Total : 7277.62 28.43 0.00 0.00 0.00 0.00 0.00 00:24:37.071 00:24:38.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:38.007 Nvme0n1 : 9.00 7259.22 28.36 0.00 0.00 0.00 0.00 0.00 00:24:38.007 [2024-11-20T09:18:16.926Z] =================================================================================================================== 00:24:38.007 [2024-11-20T09:18:16.926Z] Total : 7259.22 28.36 0.00 0.00 0.00 0.00 0.00 00:24:38.007 00:24:38.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:38.944 Nvme0n1 : 10.00 7191.00 28.09 0.00 0.00 0.00 0.00 0.00 00:24:38.944 [2024-11-20T09:18:17.863Z] =================================================================================================================== 00:24:38.944 [2024-11-20T09:18:17.863Z] Total : 7191.00 28.09 0.00 0.00 0.00 0.00 0.00 00:24:38.944 00:24:38.944 00:24:38.944 Latency(us) 00:24:38.944 [2024-11-20T09:18:17.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:38.944 Nvme0n1 : 10.01 7197.31 28.11 0.00 0.00 17777.67 8519.68 44087.85 00:24:38.944 [2024-11-20T09:18:17.863Z] =================================================================================================================== 00:24:38.944 [2024-11-20T09:18:17.863Z] Total : 7197.31 28.11 0.00 
0.00 17777.67 8519.68 44087.85 00:24:38.944 { 00:24:38.944 "results": [ 00:24:38.944 { 00:24:38.944 "job": "Nvme0n1", 00:24:38.944 "core_mask": "0x2", 00:24:38.944 "workload": "randwrite", 00:24:38.944 "status": "finished", 00:24:38.944 "queue_depth": 128, 00:24:38.944 "io_size": 4096, 00:24:38.944 "runtime": 10.009011, 00:24:38.944 "iops": 7197.314499904136, 00:24:38.944 "mibps": 28.11450976525053, 00:24:38.944 "io_failed": 0, 00:24:38.944 "io_timeout": 0, 00:24:38.944 "avg_latency_us": 17777.666974753225, 00:24:38.944 "min_latency_us": 8519.68, 00:24:38.944 "max_latency_us": 44087.854545454546 00:24:38.944 } 00:24:38.944 ], 00:24:38.944 "core_count": 1 00:24:38.944 } 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102938 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 102938 ']' 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 102938 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102938 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:38.944 killing process with pid 102938 00:24:38.944 09:18:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102938' 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 102938 00:24:38.944 Received shutdown signal, test time was about 10.000000 seconds 00:24:38.944 00:24:38.944 Latency(us) 00:24:38.944 [2024-11-20T09:18:17.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.944 [2024-11-20T09:18:17.863Z] =================================================================================================================== 00:24:38.944 [2024-11-20T09:18:17.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:38.944 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 102938 00:24:39.203 09:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:39.461 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:39.720 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:39.720 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:40.287 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:40.287 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:24:40.287 09:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:40.544 [2024-11-20 09:18:19.237152] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:40.544 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:40.802 2024/11/20 09:18:19 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d1d36b51-97e7-49d8-81bd-7998718c9004], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:24:40.802 request: 00:24:40.802 { 00:24:40.802 "method": "bdev_lvol_get_lvstores", 00:24:40.802 "params": { 00:24:40.802 "uuid": "d1d36b51-97e7-49d8-81bd-7998718c9004" 00:24:40.802 } 00:24:40.802 } 00:24:40.802 Got JSON-RPC error response 00:24:40.802 GoRPCClient: error on JSON-RPC call 00:24:40.802 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:24:40.802 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:40.802 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:40.802 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:40.802 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 
aio_bdev 4096 00:24:41.061 aio_bdev 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5d29036c-028d-4baa-bdf4-24290ddd8d1c 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5d29036c-028d-4baa-bdf4-24290ddd8d1c 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:41.061 09:18:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:41.629 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5d29036c-028d-4baa-bdf4-24290ddd8d1c -t 2000 00:24:41.630 [ 00:24:41.630 { 00:24:41.630 "aliases": [ 00:24:41.630 "lvs/lvol" 00:24:41.630 ], 00:24:41.630 "assigned_rate_limits": { 00:24:41.630 "r_mbytes_per_sec": 0, 00:24:41.630 "rw_ios_per_sec": 0, 00:24:41.630 "rw_mbytes_per_sec": 0, 00:24:41.630 "w_mbytes_per_sec": 0 00:24:41.630 }, 00:24:41.630 "block_size": 4096, 00:24:41.630 "claimed": false, 00:24:41.630 "driver_specific": { 00:24:41.630 "lvol": { 00:24:41.630 "base_bdev": "aio_bdev", 00:24:41.630 "clone": false, 00:24:41.630 "esnap_clone": false, 00:24:41.630 "lvol_store_uuid": "d1d36b51-97e7-49d8-81bd-7998718c9004", 
00:24:41.630 "num_allocated_clusters": 38, 00:24:41.630 "snapshot": false, 00:24:41.630 "thin_provision": false 00:24:41.630 } 00:24:41.630 }, 00:24:41.630 "name": "5d29036c-028d-4baa-bdf4-24290ddd8d1c", 00:24:41.630 "num_blocks": 38912, 00:24:41.630 "product_name": "Logical Volume", 00:24:41.630 "supported_io_types": { 00:24:41.630 "abort": false, 00:24:41.630 "compare": false, 00:24:41.630 "compare_and_write": false, 00:24:41.630 "copy": false, 00:24:41.630 "flush": false, 00:24:41.630 "get_zone_info": false, 00:24:41.630 "nvme_admin": false, 00:24:41.630 "nvme_io": false, 00:24:41.630 "nvme_io_md": false, 00:24:41.630 "nvme_iov_md": false, 00:24:41.630 "read": true, 00:24:41.630 "reset": true, 00:24:41.630 "seek_data": true, 00:24:41.630 "seek_hole": true, 00:24:41.630 "unmap": true, 00:24:41.630 "write": true, 00:24:41.630 "write_zeroes": true, 00:24:41.630 "zcopy": false, 00:24:41.630 "zone_append": false, 00:24:41.630 "zone_management": false 00:24:41.630 }, 00:24:41.630 "uuid": "5d29036c-028d-4baa-bdf4-24290ddd8d1c", 00:24:41.630 "zoned": false 00:24:41.630 } 00:24:41.630 ] 00:24:41.630 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:24:41.630 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:41.630 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:42.199 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:42.199 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:42.199 09:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:42.459 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:42.459 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5d29036c-028d-4baa-bdf4-24290ddd8d1c 00:24:42.717 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d1d36b51-97e7-49d8-81bd-7998718c9004 00:24:42.976 09:18:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:43.542 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:43.801 00:24:43.801 real 0m18.983s 00:24:43.801 user 0m18.085s 00:24:43.801 sys 0m2.326s 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:43.801 ************************************ 00:24:43.801 END TEST lvs_grow_clean 00:24:43.801 ************************************ 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:43.801 ************************************ 00:24:43.801 START TEST lvs_grow_dirty 00:24:43.801 ************************************ 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:43.801 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:44.059 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:44.059 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:44.059 09:18:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:44.317 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:44.317 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:44.631 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=82022a7f-135d-4dbc-920c-7f40221da10c 00:24:44.631 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:44.631 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:24:44.921 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:44.921 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:44.921 09:18:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 82022a7f-135d-4dbc-920c-7f40221da10c lvol 150 00:24:45.180 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:24:45.180 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:45.180 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:45.748 [2024-11-20 09:18:24.397089] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:45.749 [2024-11-20 09:18:24.397238] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:45.749 true 00:24:45.749 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:24:45.749 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:46.008 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:46.008 09:18:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:46.266 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:24:46.525 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:47.092 [2024-11-20 09:18:25.709515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.092 09:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=103370 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 103370 /var/tmp/bdevperf.sock 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103370 ']' 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.092 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:47.350 [2024-11-20 09:18:26.050881] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:24:47.350 [2024-11-20 09:18:26.050976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103370 ] 00:24:47.350 [2024-11-20 09:18:26.198792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.609 [2024-11-20 09:18:26.275429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.609 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.609 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:47.609 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:47.867 Nvme0n1 00:24:48.125 09:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:48.384 [ 00:24:48.384 { 00:24:48.384 "aliases": [ 00:24:48.384 "7b2ce4d2-8279-4d79-8f07-46a53c2023ab" 00:24:48.384 ], 00:24:48.384 "assigned_rate_limits": { 00:24:48.384 "r_mbytes_per_sec": 0, 00:24:48.384 "rw_ios_per_sec": 0, 
00:24:48.384 "rw_mbytes_per_sec": 0, 00:24:48.384 "w_mbytes_per_sec": 0 00:24:48.384 }, 00:24:48.384 "block_size": 4096, 00:24:48.384 "claimed": false, 00:24:48.384 "driver_specific": { 00:24:48.384 "mp_policy": "active_passive", 00:24:48.384 "nvme": [ 00:24:48.384 { 00:24:48.384 "ctrlr_data": { 00:24:48.384 "ana_reporting": false, 00:24:48.384 "cntlid": 1, 00:24:48.384 "firmware_revision": "25.01", 00:24:48.384 "model_number": "SPDK bdev Controller", 00:24:48.384 "multi_ctrlr": true, 00:24:48.384 "oacs": { 00:24:48.384 "firmware": 0, 00:24:48.384 "format": 0, 00:24:48.384 "ns_manage": 0, 00:24:48.384 "security": 0 00:24:48.384 }, 00:24:48.384 "serial_number": "SPDK0", 00:24:48.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.384 "vendor_id": "0x8086" 00:24:48.384 }, 00:24:48.384 "ns_data": { 00:24:48.384 "can_share": true, 00:24:48.384 "id": 1 00:24:48.384 }, 00:24:48.384 "trid": { 00:24:48.384 "adrfam": "IPv4", 00:24:48.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:48.384 "traddr": "10.0.0.2", 00:24:48.384 "trsvcid": "4420", 00:24:48.384 "trtype": "TCP" 00:24:48.384 }, 00:24:48.384 "vs": { 00:24:48.384 "nvme_version": "1.3" 00:24:48.384 } 00:24:48.384 } 00:24:48.384 ] 00:24:48.384 }, 00:24:48.384 "memory_domains": [ 00:24:48.384 { 00:24:48.384 "dma_device_id": "system", 00:24:48.384 "dma_device_type": 1 00:24:48.384 } 00:24:48.384 ], 00:24:48.384 "name": "Nvme0n1", 00:24:48.384 "num_blocks": 38912, 00:24:48.384 "numa_id": -1, 00:24:48.384 "product_name": "NVMe disk", 00:24:48.384 "supported_io_types": { 00:24:48.384 "abort": true, 00:24:48.384 "compare": true, 00:24:48.384 "compare_and_write": true, 00:24:48.384 "copy": true, 00:24:48.384 "flush": true, 00:24:48.384 "get_zone_info": false, 00:24:48.384 "nvme_admin": true, 00:24:48.384 "nvme_io": true, 00:24:48.384 "nvme_io_md": false, 00:24:48.384 "nvme_iov_md": false, 00:24:48.384 "read": true, 00:24:48.384 "reset": true, 00:24:48.384 "seek_data": false, 00:24:48.384 "seek_hole": false, 00:24:48.384 
"unmap": true, 00:24:48.384 "write": true, 00:24:48.384 "write_zeroes": true, 00:24:48.384 "zcopy": false, 00:24:48.384 "zone_append": false, 00:24:48.384 "zone_management": false 00:24:48.384 }, 00:24:48.384 "uuid": "7b2ce4d2-8279-4d79-8f07-46a53c2023ab", 00:24:48.384 "zoned": false 00:24:48.384 } 00:24:48.384 ] 00:24:48.384 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=103404 00:24:48.384 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.384 09:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:48.384 Running I/O for 10 seconds... 00:24:49.760 Latency(us) 00:24:49.760 [2024-11-20T09:18:28.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.760 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:49.760 Nvme0n1 : 1.00 7282.00 28.45 0.00 0.00 0.00 0.00 0.00 00:24:49.760 [2024-11-20T09:18:28.679Z] =================================================================================================================== 00:24:49.760 [2024-11-20T09:18:28.679Z] Total : 7282.00 28.45 0.00 0.00 0.00 0.00 0.00 00:24:49.760 00:24:50.326 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:24:50.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:50.584 Nvme0n1 : 2.00 7530.00 29.41 0.00 0.00 0.00 0.00 0.00 00:24:50.584 [2024-11-20T09:18:29.503Z] =================================================================================================================== 00:24:50.584 
[2024-11-20T09:18:29.503Z] Total : 7530.00 29.41 0.00 0.00 0.00 0.00 0.00 00:24:50.584 00:24:50.842 true 00:24:50.842 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:24:50.842 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:51.099 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:51.099 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:51.099 09:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 103404 00:24:51.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:51.666 Nvme0n1 : 3.00 7625.33 29.79 0.00 0.00 0.00 0.00 0.00 00:24:51.666 [2024-11-20T09:18:30.585Z] =================================================================================================================== 00:24:51.666 [2024-11-20T09:18:30.585Z] Total : 7625.33 29.79 0.00 0.00 0.00 0.00 0.00 00:24:51.666 00:24:52.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:52.600 Nvme0n1 : 4.00 7613.75 29.74 0.00 0.00 0.00 0.00 0.00 00:24:52.600 [2024-11-20T09:18:31.519Z] =================================================================================================================== 00:24:52.600 [2024-11-20T09:18:31.519Z] Total : 7613.75 29.74 0.00 0.00 0.00 0.00 0.00 00:24:52.600 00:24:53.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:53.536 Nvme0n1 : 5.00 7621.00 29.77 0.00 0.00 0.00 0.00 0.00 00:24:53.536 [2024-11-20T09:18:32.455Z] 
=================================================================================================================== 00:24:53.536 [2024-11-20T09:18:32.455Z] Total : 7621.00 29.77 0.00 0.00 0.00 0.00 0.00 00:24:53.536 00:24:54.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.472 Nvme0n1 : 6.00 7437.17 29.05 0.00 0.00 0.00 0.00 0.00 00:24:54.472 [2024-11-20T09:18:33.391Z] =================================================================================================================== 00:24:54.472 [2024-11-20T09:18:33.391Z] Total : 7437.17 29.05 0.00 0.00 0.00 0.00 0.00 00:24:54.472 00:24:55.408 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:55.408 Nvme0n1 : 7.00 7401.00 28.91 0.00 0.00 0.00 0.00 0.00 00:24:55.408 [2024-11-20T09:18:34.327Z] =================================================================================================================== 00:24:55.408 [2024-11-20T09:18:34.327Z] Total : 7401.00 28.91 0.00 0.00 0.00 0.00 0.00 00:24:55.408 00:24:56.784 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:56.784 Nvme0n1 : 8.00 7374.00 28.80 0.00 0.00 0.00 0.00 0.00 00:24:56.784 [2024-11-20T09:18:35.703Z] =================================================================================================================== 00:24:56.784 [2024-11-20T09:18:35.703Z] Total : 7374.00 28.80 0.00 0.00 0.00 0.00 0.00 00:24:56.784 00:24:57.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.735 Nvme0n1 : 9.00 7368.33 28.78 0.00 0.00 0.00 0.00 0.00 00:24:57.735 [2024-11-20T09:18:36.654Z] =================================================================================================================== 00:24:57.735 [2024-11-20T09:18:36.654Z] Total : 7368.33 28.78 0.00 0.00 0.00 0.00 0.00 00:24:57.735 00:24:58.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.671 Nvme0n1 : 10.00 7356.40 
28.74 0.00 0.00 0.00 0.00 0.00 00:24:58.671 [2024-11-20T09:18:37.590Z] =================================================================================================================== 00:24:58.671 [2024-11-20T09:18:37.590Z] Total : 7356.40 28.74 0.00 0.00 0.00 0.00 0.00 00:24:58.671 00:24:58.671 00:24:58.671 Latency(us) 00:24:58.671 [2024-11-20T09:18:37.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:58.671 Nvme0n1 : 10.01 7362.99 28.76 0.00 0.00 17378.51 7506.85 144894.14 00:24:58.671 [2024-11-20T09:18:37.590Z] =================================================================================================================== 00:24:58.671 [2024-11-20T09:18:37.590Z] Total : 7362.99 28.76 0.00 0.00 17378.51 7506.85 144894.14 00:24:58.671 { 00:24:58.671 "results": [ 00:24:58.671 { 00:24:58.671 "job": "Nvme0n1", 00:24:58.671 "core_mask": "0x2", 00:24:58.671 "workload": "randwrite", 00:24:58.671 "status": "finished", 00:24:58.671 "queue_depth": 128, 00:24:58.671 "io_size": 4096, 00:24:58.671 "runtime": 10.008428, 00:24:58.671 "iops": 7362.994468262149, 00:24:58.671 "mibps": 28.76169714164902, 00:24:58.671 "io_failed": 0, 00:24:58.671 "io_timeout": 0, 00:24:58.671 "avg_latency_us": 17378.50721538788, 00:24:58.671 "min_latency_us": 7506.850909090909, 00:24:58.671 "max_latency_us": 144894.13818181818 00:24:58.671 } 00:24:58.671 ], 00:24:58.671 "core_count": 1 00:24:58.671 } 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 103370 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 103370 ']' 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 103370 00:24:58.671 09:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103370 00:24:58.671 killing process with pid 103370 00:24:58.671 Received shutdown signal, test time was about 10.000000 seconds 00:24:58.671 00:24:58.671 Latency(us) 00:24:58.671 [2024-11-20T09:18:37.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.671 [2024-11-20T09:18:37.590Z] =================================================================================================================== 00:24:58.671 [2024-11-20T09:18:37.590Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.671 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103370' 00:24:58.672 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 103370 00:24:58.672 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 103370 00:24:58.672 09:18:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:59.239 09:18:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:59.498 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:24:59.498 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 102792 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 102792 00:24:59.757 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 102792 Killed "${NVMF_APP[@]}" "$@" 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 
00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=103562 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 103562 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 103562 ']' 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.757 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:59.757 [2024-11-20 09:18:38.618351] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:59.757 [2024-11-20 09:18:38.619407] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:24:59.757 [2024-11-20 09:18:38.619596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.016 [2024-11-20 09:18:38.767131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.016 [2024-11-20 09:18:38.827380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.016 [2024-11-20 09:18:38.827438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.016 [2024-11-20 09:18:38.827450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.016 [2024-11-20 09:18:38.827459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.016 [2024-11-20 09:18:38.827467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.016 [2024-11-20 09:18:38.827871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.016 [2024-11-20 09:18:38.925630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:00.016 [2024-11-20 09:18:38.926027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.275 09:18:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:00.532 [2024-11-20 09:18:39.258226] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:00.532 [2024-11-20 09:18:39.260923] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:00.532 [2024-11-20 09:18:39.261356] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:00.532 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:00.790 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b2ce4d2-8279-4d79-8f07-46a53c2023ab -t 2000 00:25:01.049 [ 00:25:01.049 { 00:25:01.049 "aliases": [ 00:25:01.049 "lvs/lvol" 00:25:01.049 ], 00:25:01.049 "assigned_rate_limits": { 00:25:01.049 "r_mbytes_per_sec": 0, 00:25:01.049 "rw_ios_per_sec": 0, 00:25:01.049 "rw_mbytes_per_sec": 0, 00:25:01.049 "w_mbytes_per_sec": 0 00:25:01.049 }, 00:25:01.049 "block_size": 4096, 00:25:01.049 "claimed": false, 00:25:01.049 "driver_specific": { 00:25:01.049 "lvol": { 00:25:01.049 "base_bdev": "aio_bdev", 00:25:01.049 "clone": false, 00:25:01.049 "esnap_clone": false, 00:25:01.049 "lvol_store_uuid": "82022a7f-135d-4dbc-920c-7f40221da10c", 00:25:01.049 "num_allocated_clusters": 38, 00:25:01.049 "snapshot": false, 00:25:01.049 "thin_provision": false 00:25:01.049 } 00:25:01.049 }, 00:25:01.049 "name": "7b2ce4d2-8279-4d79-8f07-46a53c2023ab", 00:25:01.049 "num_blocks": 38912, 00:25:01.049 "product_name": "Logical Volume", 00:25:01.049 "supported_io_types": { 00:25:01.049 "abort": false, 
00:25:01.049 "compare": false, 00:25:01.049 "compare_and_write": false, 00:25:01.049 "copy": false, 00:25:01.049 "flush": false, 00:25:01.049 "get_zone_info": false, 00:25:01.049 "nvme_admin": false, 00:25:01.049 "nvme_io": false, 00:25:01.049 "nvme_io_md": false, 00:25:01.049 "nvme_iov_md": false, 00:25:01.049 "read": true, 00:25:01.049 "reset": true, 00:25:01.049 "seek_data": true, 00:25:01.049 "seek_hole": true, 00:25:01.049 "unmap": true, 00:25:01.049 "write": true, 00:25:01.049 "write_zeroes": true, 00:25:01.049 "zcopy": false, 00:25:01.049 "zone_append": false, 00:25:01.049 "zone_management": false 00:25:01.049 }, 00:25:01.049 "uuid": "7b2ce4d2-8279-4d79-8f07-46a53c2023ab", 00:25:01.049 "zoned": false 00:25:01.049 } 00:25:01.049 ] 00:25:01.049 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:01.049 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:01.049 09:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:25:01.616 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:25:01.616 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:01.616 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:25:01.874 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:25:01.874 09:18:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:02.132 [2024-11-20 09:18:40.912483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:02.132 09:18:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:02.132 09:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:02.391 2024/11/20 09:18:41 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:82022a7f-135d-4dbc-920c-7f40221da10c], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:25:02.391 request: 00:25:02.391 { 00:25:02.391 "method": "bdev_lvol_get_lvstores", 00:25:02.391 "params": { 00:25:02.391 "uuid": "82022a7f-135d-4dbc-920c-7f40221da10c" 00:25:02.391 } 00:25:02.391 } 00:25:02.391 Got JSON-RPC error response 00:25:02.391 GoRPCClient: error on JSON-RPC call 00:25:02.391 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:25:02.391 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:02.391 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:02.391 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:02.391 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:25:02.958 aio_bdev 00:25:02.958 09:18:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:25:02.958 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:25:03.217 09:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b2ce4d2-8279-4d79-8f07-46a53c2023ab -t 2000 00:25:03.476 [ 00:25:03.476 { 00:25:03.476 "aliases": [ 00:25:03.476 "lvs/lvol" 00:25:03.476 ], 00:25:03.476 "assigned_rate_limits": { 00:25:03.476 "r_mbytes_per_sec": 0, 00:25:03.476 "rw_ios_per_sec": 0, 00:25:03.476 "rw_mbytes_per_sec": 0, 00:25:03.476 "w_mbytes_per_sec": 0 00:25:03.476 }, 00:25:03.476 "block_size": 4096, 00:25:03.476 "claimed": false, 00:25:03.476 "driver_specific": { 00:25:03.476 "lvol": { 00:25:03.476 "base_bdev": "aio_bdev", 00:25:03.476 "clone": false, 00:25:03.476 "esnap_clone": false, 00:25:03.476 "lvol_store_uuid": "82022a7f-135d-4dbc-920c-7f40221da10c", 00:25:03.476 "num_allocated_clusters": 38, 00:25:03.476 
"snapshot": false, 00:25:03.476 "thin_provision": false 00:25:03.476 } 00:25:03.476 }, 00:25:03.476 "name": "7b2ce4d2-8279-4d79-8f07-46a53c2023ab", 00:25:03.476 "num_blocks": 38912, 00:25:03.476 "product_name": "Logical Volume", 00:25:03.476 "supported_io_types": { 00:25:03.476 "abort": false, 00:25:03.476 "compare": false, 00:25:03.476 "compare_and_write": false, 00:25:03.476 "copy": false, 00:25:03.476 "flush": false, 00:25:03.476 "get_zone_info": false, 00:25:03.476 "nvme_admin": false, 00:25:03.476 "nvme_io": false, 00:25:03.476 "nvme_io_md": false, 00:25:03.476 "nvme_iov_md": false, 00:25:03.476 "read": true, 00:25:03.476 "reset": true, 00:25:03.476 "seek_data": true, 00:25:03.476 "seek_hole": true, 00:25:03.476 "unmap": true, 00:25:03.476 "write": true, 00:25:03.476 "write_zeroes": true, 00:25:03.476 "zcopy": false, 00:25:03.476 "zone_append": false, 00:25:03.476 "zone_management": false 00:25:03.476 }, 00:25:03.476 "uuid": "7b2ce4d2-8279-4d79-8f07-46a53c2023ab", 00:25:03.476 "zoned": false 00:25:03.476 } 00:25:03.476 ] 00:25:03.476 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:25:03.476 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:03.476 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:25:03.735 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:25:03.735 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:03.735 09:18:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:25:04.302 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:25:04.302 09:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7b2ce4d2-8279-4d79-8f07-46a53c2023ab 00:25:04.561 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82022a7f-135d-4dbc-920c-7f40221da10c 00:25:04.819 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:25:05.078 09:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:25:05.646 00:25:05.646 real 0m21.575s 00:25:05.646 user 0m29.492s 00:25:05.646 sys 0m8.373s 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:25:05.646 ************************************ 00:25:05.646 END TEST lvs_grow_dirty 00:25:05.646 ************************************ 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@813 -- # id=0 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:05.646 nvmf_trace.0 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:05.646 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:06.213 rmmod nvme_tcp 00:25:06.213 rmmod nvme_fabrics 
00:25:06.213 rmmod nvme_keyring 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 103562 ']' 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 103562 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 103562 ']' 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 103562 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103562 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.213 killing process with pid 103562 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103562' 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 103562 00:25:06.213 09:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@978 -- # wait 103562 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:06.472 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # continue 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:25:06.472 00:25:06.472 real 0m43.184s 00:25:06.472 user 0m48.904s 00:25:06.472 sys 0m11.915s 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.472 ************************************ 00:25:06.472 END TEST nvmf_lvs_grow 00:25:06.472 ************************************ 00:25:06.472 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 
00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:06.732 ************************************ 00:25:06.732 START TEST nvmf_bdev_io_wait 00:25:06.732 ************************************ 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:06.732 * Looking for test storage... 00:25:06.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.732 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:06.732 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.732 --rc genhtml_branch_coverage=1 00:25:06.732 --rc genhtml_function_coverage=1 00:25:06.732 --rc genhtml_legend=1 00:25:06.732 --rc geninfo_all_blocks=1 00:25:06.732 --rc geninfo_unexecuted_blocks=1 00:25:06.732 00:25:06.732 ' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:06.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.732 --rc genhtml_branch_coverage=1 00:25:06.732 --rc genhtml_function_coverage=1 00:25:06.732 --rc genhtml_legend=1 00:25:06.732 --rc geninfo_all_blocks=1 00:25:06.732 --rc geninfo_unexecuted_blocks=1 00:25:06.732 00:25:06.732 ' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:06.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.732 --rc genhtml_branch_coverage=1 00:25:06.732 --rc genhtml_function_coverage=1 00:25:06.732 --rc genhtml_legend=1 00:25:06.732 --rc geninfo_all_blocks=1 00:25:06.732 --rc geninfo_unexecuted_blocks=1 00:25:06.732 00:25:06.732 ' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:06.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.732 --rc genhtml_branch_coverage=1 00:25:06.732 --rc genhtml_function_coverage=1 00:25:06.732 --rc genhtml_legend=1 00:25:06.732 --rc geninfo_all_blocks=1 00:25:06.732 --rc geninfo_unexecuted_blocks=1 00:25:06.732 00:25:06.732 ' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.732 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.992 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:06.993 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@223 -- # create_target_ns 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 
00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev 
)) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target0 00:25:06.993 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:06.993 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:06.994 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:06.994 10.0.0.1 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:06.994 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:06.994 10.0.0.2 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target0 
NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:06.994 
09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=target0 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:06.994 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:06.994 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@151 -- # set_up target1 00:25:06.995 
09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:06.995 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772163 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:06.995 10.0.0.3 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:06.995 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772164 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:06.995 10.0.0.4 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:06.995 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1 up 
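The entries above repeat the same recipe per interface pair: create a veth pair for the initiator and one for the target, move the target end into the `nvmf_ns_spdk` namespace, and attach both `_br` peers to the `nvmf_br` bridge. As a condensed sketch (the `build_pair_cmds` helper is hypothetical, not part of `nvmf/setup.sh`; it only collects the commands, since actually running them requires root):

```shell
#!/usr/bin/env bash
# Collect (not execute) the per-pair link commands the trace performs.
# Pair id 0 yields initiator0/target0; the trace then assigns IPs and
# inserts an iptables ACCEPT rule for TCP port 4420 on the initiator.
build_pair_cmds() {
  local id=$1 init="initiator$1" tgt="target$1" ns=nvmf_ns_spdk
  printf '%s\n' \
    "ip link add $init type veth peer name ${init}_br" \
    "ip link add $tgt type veth peer name ${tgt}_br" \
    "ip link set $tgt netns $ns" \
    "ip link set ${init}_br master nvmf_br" \
    "ip link set ${tgt}_br master nvmf_br"
}

build_pair_cmds 0
```

Running the sketch just prints the five `ip link` commands for pair 0; in the real script each link is also brought up with `ip link set <dev> up` (inside the namespace for the target end).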
00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:07.255 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=initiator1 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:07.255 09:18:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:07.255 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:07.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:07.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:25:07.256 00:25:07.256 --- 10.0.0.1 ping statistics --- 00:25:07.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.256 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:07.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:07.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:25:07.256 00:25:07.256 --- 10.0.0.2 ping statistics --- 00:25:07.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.256 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:07.256 09:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:07.256 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:07.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:07.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:25:07.256 00:25:07.256 --- 10.0.0.3 ping statistics --- 00:25:07.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.256 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:07.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:07.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:25:07.256 00:25:07.256 --- 10.0.0.4 ping statistics --- 00:25:07.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.256 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # return 0 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:07.256 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator0 00:25:07.256 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:07.257 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target0 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target0 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo target1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=target1 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:07.257 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.257 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=104027 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 104027 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 104027 ']' 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.257 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.517 [2024-11-20 09:18:46.199065] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:07.517 [2024-11-20 09:18:46.200322] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:07.517 [2024-11-20 09:18:46.200407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.517 [2024-11-20 09:18:46.353510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:07.517 [2024-11-20 09:18:46.416810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.517 [2024-11-20 09:18:46.416890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.517 [2024-11-20 09:18:46.416913] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.517 [2024-11-20 09:18:46.416924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.517 [2024-11-20 09:18:46.416934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.517 [2024-11-20 09:18:46.418237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.517 [2024-11-20 09:18:46.418371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.517 [2024-11-20 09:18:46.421799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.517 [2024-11-20 09:18:46.421835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.517 [2024-11-20 09:18:46.422343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 [2024-11-20 09:18:46.595531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:07.777 [2024-11-20 09:18:46.595743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:07.777 [2024-11-20 09:18:46.597142] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:07.777 [2024-11-20 09:18:46.597267] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 [2024-11-20 09:18:46.606662] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 Malloc0 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:07.777 [2024-11-20 09:18:46.674956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=104062 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=104064 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:07.777 { 00:25:07.777 "params": { 00:25:07.777 "name": "Nvme$subsystem", 00:25:07.777 "trtype": "$TEST_TRANSPORT", 00:25:07.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.777 "adrfam": "ipv4", 00:25:07.777 "trsvcid": "$NVMF_PORT", 00:25:07.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.777 "hdgst": ${hdgst:-false}, 00:25:07.777 "ddgst": ${ddgst:-false} 00:25:07.777 }, 00:25:07.777 "method": "bdev_nvme_attach_controller" 00:25:07.777 } 00:25:07.777 EOF 00:25:07.777 )") 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:25:07.777 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=104066 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:07.777 { 00:25:07.777 "params": { 00:25:07.777 "name": "Nvme$subsystem", 00:25:07.777 "trtype": "$TEST_TRANSPORT", 00:25:07.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.777 "adrfam": "ipv4", 00:25:07.777 "trsvcid": "$NVMF_PORT", 00:25:07.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.777 "hdgst": ${hdgst:-false}, 00:25:07.777 "ddgst": ${ddgst:-false} 00:25:07.777 }, 00:25:07.777 "method": "bdev_nvme_attach_controller" 00:25:07.777 } 00:25:07.777 EOF 00:25:07.777 )") 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=104069 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap 
-t 1 -s 256 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:25:07.777 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:07.777 { 00:25:07.777 "params": { 00:25:07.777 "name": "Nvme$subsystem", 00:25:07.778 "trtype": "$TEST_TRANSPORT", 00:25:07.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.778 "adrfam": "ipv4", 00:25:07.778 "trsvcid": "$NVMF_PORT", 00:25:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.778 "hdgst": ${hdgst:-false}, 00:25:07.778 "ddgst": ${ddgst:-false} 00:25:07.778 }, 00:25:07.778 "method": "bdev_nvme_attach_controller" 00:25:07.778 } 00:25:07.778 EOF 00:25:07.778 )") 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:25:07.778 09:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:07.778 "params": { 00:25:07.778 "name": "Nvme1", 00:25:07.778 "trtype": "tcp", 00:25:07.778 "traddr": "10.0.0.2", 00:25:07.778 "adrfam": "ipv4", 00:25:07.778 "trsvcid": "4420", 00:25:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.778 "hdgst": false, 00:25:07.778 "ddgst": false 00:25:07.778 }, 00:25:07.778 "method": "bdev_nvme_attach_controller" 00:25:07.778 }' 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:07.778 { 00:25:07.778 "params": { 00:25:07.778 "name": "Nvme$subsystem", 00:25:07.778 "trtype": "$TEST_TRANSPORT", 00:25:07.778 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.778 "adrfam": "ipv4", 00:25:07.778 "trsvcid": "$NVMF_PORT", 00:25:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.778 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.778 "hdgst": ${hdgst:-false}, 00:25:07.778 "ddgst": ${ddgst:-false} 00:25:07.778 }, 00:25:07.778 "method": "bdev_nvme_attach_controller" 00:25:07.778 } 00:25:07.778 EOF 00:25:07.778 )") 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:07.778 "params": { 00:25:07.778 "name": "Nvme1", 00:25:07.778 "trtype": "tcp", 00:25:07.778 "traddr": "10.0.0.2", 00:25:07.778 "adrfam": "ipv4", 00:25:07.778 "trsvcid": "4420", 00:25:07.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.778 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.778 "hdgst": false, 00:25:07.778 "ddgst": false 00:25:07.778 }, 00:25:07.778 "method": 
"bdev_nvme_attach_controller" 00:25:07.778 }' 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:25:07.778 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:08.037 "params": { 00:25:08.037 "name": "Nvme1", 00:25:08.037 "trtype": "tcp", 00:25:08.037 "traddr": "10.0.0.2", 00:25:08.037 "adrfam": "ipv4", 00:25:08.037 "trsvcid": "4420", 00:25:08.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.037 "hdgst": false, 00:25:08.037 "ddgst": false 00:25:08.037 }, 00:25:08.037 "method": "bdev_nvme_attach_controller" 00:25:08.037 }' 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:08.037 "params": { 00:25:08.037 "name": "Nvme1", 00:25:08.037 "trtype": "tcp", 00:25:08.037 "traddr": "10.0.0.2", 00:25:08.037 "adrfam": "ipv4", 00:25:08.037 "trsvcid": "4420", 00:25:08.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.037 "hdgst": false, 00:25:08.037 "ddgst": false 00:25:08.037 }, 00:25:08.037 "method": "bdev_nvme_attach_controller" 00:25:08.037 }' 00:25:08.037 [2024-11-20 09:18:46.741365] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:08.037 [2024-11-20 09:18:46.741372] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:08.037 [2024-11-20 09:18:46.741464] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:25:08.037 [2024-11-20 09:18:46.741465] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:08.037 [2024-11-20 09:18:46.744126] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:08.037 [2024-11-20 09:18:46.744207] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:25:08.037 09:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 104062 [2024-11-20 09:18:46.767171] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:08.037 [2024-11-20 09:18:46.767253] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:25:08.296 [2024-11-20 09:18:46.967563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.296 [2024-11-20 09:18:47.030595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:08.296 [2024-11-20 09:18:47.043215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.296 [2024-11-20 09:18:47.098926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:08.296 [2024-11-20 09:18:47.114411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.296 [2024-11-20 09:18:47.171000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:08.296 Running I/O for 1 seconds... 00:25:08.296 [2024-11-20 09:18:47.187979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.555 Running I/O for 1 seconds... 00:25:08.555 [2024-11-20 09:18:47.243895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:08.555 Running I/O for 1 seconds... 00:25:08.555 Running I/O for 1 seconds... 
00:25:09.491 10463.00 IOPS, 40.87 MiB/s 00:25:09.491 Latency(us) 00:25:09.491 [2024-11-20T09:18:48.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.491 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:25:09.491 Nvme1n1 : 1.01 10533.15 41.15 0.00 0.00 12107.37 5481.19 23592.96 00:25:09.491 [2024-11-20T09:18:48.410Z] =================================================================================================================== 00:25:09.491 [2024-11-20T09:18:48.410Z] Total : 10533.15 41.15 0.00 0.00 12107.37 5481.19 23592.96 00:25:09.491 5032.00 IOPS, 19.66 MiB/s 00:25:09.491 Latency(us) 00:25:09.491 [2024-11-20T09:18:48.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.491 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:25:09.491 Nvme1n1 : 1.02 5047.74 19.72 0.00 0.00 24997.22 10843.23 35985.22 00:25:09.491 [2024-11-20T09:18:48.410Z] =================================================================================================================== 00:25:09.491 [2024-11-20T09:18:48.410Z] Total : 5047.74 19.72 0.00 0.00 24997.22 10843.23 35985.22 00:25:09.491 4989.00 IOPS, 19.49 MiB/s 00:25:09.491 Latency(us) 00:25:09.491 [2024-11-20T09:18:48.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.491 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:25:09.491 Nvme1n1 : 1.01 5075.80 19.83 0.00 0.00 25095.69 7566.43 44326.17 00:25:09.491 [2024-11-20T09:18:48.410Z] =================================================================================================================== 00:25:09.491 [2024-11-20T09:18:48.410Z] Total : 5075.80 19.83 0.00 0.00 25095.69 7566.43 44326.17 00:25:09.491 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 104064 00:25:09.491 196408.00 IOPS, 767.22 MiB/s 00:25:09.491 Latency(us) 00:25:09.491 
[2024-11-20T09:18:48.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.491 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:25:09.491 Nvme1n1 : 1.00 196026.30 765.73 0.00 0.00 649.39 296.03 1921.40 00:25:09.491 [2024-11-20T09:18:48.410Z] =================================================================================================================== 00:25:09.491 [2024-11-20T09:18:48.410Z] Total : 196026.30 765.73 0.00 0.00 649.39 296.03 1921.40 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 104066 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 104069 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:09.750 09:18:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:09.750 rmmod nvme_tcp 00:25:09.750 rmmod nvme_fabrics 00:25:09.750 rmmod nvme_keyring 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 104027 ']' 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 104027 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 104027 ']' 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 104027 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104027 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.750 killing process with pid 104027 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104027' 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 104027 00:25:09.750 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 104027 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:10.008 
09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:10.008 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:10.267 09:18:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:10.267 09:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # continue 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- 
# grep -v SPDK_NVMF 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:25:10.267 00:25:10.267 real 0m3.589s 00:25:10.267 user 0m12.886s 00:25:10.267 sys 0m2.479s 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:10.267 ************************************ 00:25:10.267 END TEST nvmf_bdev_io_wait 00:25:10.267 ************************************ 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:10.267 ************************************ 00:25:10.267 START TEST nvmf_queue_depth 00:25:10.267 ************************************ 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:10.267 * Looking for test storage... 
00:25:10.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:10.267 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.268 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.268 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.528 
09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.528 
09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.528 --rc genhtml_branch_coverage=1 00:25:10.528 --rc genhtml_function_coverage=1 00:25:10.528 --rc genhtml_legend=1 00:25:10.528 --rc geninfo_all_blocks=1 00:25:10.528 --rc geninfo_unexecuted_blocks=1 00:25:10.528 00:25:10.528 ' 00:25:10.528 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.528 --rc genhtml_branch_coverage=1 00:25:10.528 --rc genhtml_function_coverage=1 00:25:10.528 --rc genhtml_legend=1 00:25:10.528 --rc geninfo_all_blocks=1 00:25:10.528 --rc geninfo_unexecuted_blocks=1 00:25:10.528 00:25:10.529 ' 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.529 --rc genhtml_branch_coverage=1 00:25:10.529 --rc genhtml_function_coverage=1 00:25:10.529 --rc genhtml_legend=1 00:25:10.529 --rc geninfo_all_blocks=1 00:25:10.529 --rc geninfo_unexecuted_blocks=1 00:25:10.529 00:25:10.529 ' 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.529 --rc genhtml_branch_coverage=1 00:25:10.529 --rc genhtml_function_coverage=1 00:25:10.529 --rc genhtml_legend=1 00:25:10.529 --rc geninfo_all_blocks=1 
00:25:10.529 --rc geninfo_unexecuted_blocks=1 00:25:10.529 00:25:10.529 ' 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- 
# NVME_CONNECT='nvme connect' 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode)
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ virt != virt ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ no == yes ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # [[ virt == phy ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@279 -- # [[ tcp == tcp ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@280 -- # nvmf_veth_init
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@223 -- # create_target_ns
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
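The entries above show create_target_ns building the nvmf_ns_spdk network namespace and recording the NVMF_TARGET_NS_CMD prefix used for every later target-side command. A minimal standalone sketch of that step (names taken from the trace; the run/DRY_RUN wrapper is an addition here so the commands can be previewed without root):

```shell
#!/usr/bin/env bash
# Sketch of create_target_ns as traced above. With DRY_RUN=1 the
# commands are printed instead of executed (the real ones need root).
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk

run() { if [[ -n ${DRY_RUN:-} ]]; then echo "$*"; else "$@"; fi; }

create_target_ns() {
    run ip netns add "$NVMF_TARGET_NAMESPACE"
    # Every later target-side command is prefixed with this array,
    # e.g. "${NVMF_TARGET_NS_CMD[@]}" ip addr add ...
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    run "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
}

DRY_RUN=1 create_target_ns
```

Keeping the `ip netns exec` prefix in an array, as the real setup.sh does, lets the same helper functions run a command either in the root namespace (empty prefix) or inside the target namespace.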
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # create_main_bridge
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@105 -- # delete_main_bridge
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:25:10.529 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@108 -- # set_up nvmf_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set nvmf_br up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=()
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target0 target0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0 up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias
00:25:10.530 10.0.0.1
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias
00:25:10.530 10.0.0.2
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator0
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns=
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0 up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up'
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br
00:25:10.530 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator0_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator0_br up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target0_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target0_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target0_br up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=()
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ veth == phy ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ veth == veth ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up initiator1
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up initiator1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ veth == veth ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # create_veth target1 target1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@151 -- # set_up target1
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1 up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@152 -- # set_up target1_br
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:25:10.531 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns target1
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns=
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772163
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772163
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.3
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.3
00:25:10.802 10.0.0.3
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772164
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772164
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.4
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.4
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias
00:25:10.802 10.0.0.4
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up initiator1
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns=
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1 up
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ veth == veth ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up initiator1_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns=
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@129 -- # set_up target1_br
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:25:10.802 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 2
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth --
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:10.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:25:10.803 00:25:10.803 --- 10.0.0.1 ping statistics --- 00:25:10.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.803 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:25:10.803 09:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:10.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:10.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:25:10.803 00:25:10.803 --- 10.0.0.2 ping statistics --- 00:25:10.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.803 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:10.803 09:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:10.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:10.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:25:10.803 00:25:10.803 --- 10.0.0.3 ping statistics --- 00:25:10.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.803 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:10.803 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:10.804 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:10.804 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:25:10.804 00:25:10.804 --- 10.0.0.4 ping statistics --- 00:25:10.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.804 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # return 0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:10.804 09:18:49 
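The connectivity check traced above follows a fixed pattern: for each device pair, the initiator IP is pinged from inside the target namespace and the target IP from the host. A dry-run sketch of that loop (namespace name and IP pool taken from the log; commands are printed rather than executed, so no root or namespaces are required):

```shell
# Dry-run sketch of the ping_ips loop from nvmf/setup.sh as seen in the
# trace: two initiator/target pairs drawn from the 10.0.0.0/24 pool.
print_ping_cmds() {
    ns="nvmf_ns_spdk"        # target namespace name, from the log
    pair=0 pairs=2 ip_pool=1
    while [ "$pair" -lt "$pairs" ]; do
        # initiator IPs are pinged from inside the target namespace ...
        echo "ip netns exec $ns ping -c 1 10.0.0.$ip_pool"
        # ... and target IPs from the host side
        echo "ping -c 1 10.0.0.$((ip_pool + 1))"
        pair=$((pair + 1)) ip_pool=$((ip_pool + 2))
    done
}
print_ping_cmds
```

That matches the four pings in the trace: 10.0.0.1 and 10.0.0.3 via `ip netns exec nvmf_ns_spdk`, 10.0.0.2 and 10.0.0.4 directly.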
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:10.804 09:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target0 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.804 09:18:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=target1 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.804 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=104329 00:25:11.101 09:18:49 
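Once setup completes, `nvmf_legacy_env` maps the `dev_map` entries onto the variable names the downstream tests consume. Collected from the assignments in the trace above, the resulting environment is:

```shell
# Legacy environment produced by nvmf_legacy_env, with the values
# assigned in the trace (tcp transport, two device pairs).
NVMF_TARGET_INTERFACE=target0
NVMF_TARGET_INTERFACE2=target1
NVMF_FIRST_INITIATOR_IP=10.0.0.1
NVMF_SECOND_INITIATOR_IP=10.0.0.3
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_SECOND_TARGET_IP=10.0.0.4
NVMF_TRANSPORT_OPTS="-t tcp -o"   # "-o" appended because transport == tcp
```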
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 104329 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104329 ']' 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.101 09:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:11.101 [2024-11-20 09:18:49.761663] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:11.101 [2024-11-20 09:18:49.762690] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:11.101 [2024-11-20 09:18:49.762753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.101 [2024-11-20 09:18:49.909578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.101 [2024-11-20 09:18:49.963452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.101 [2024-11-20 09:18:49.963513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.101 [2024-11-20 09:18:49.963541] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:11.101 [2024-11-20 09:18:49.963549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:11.101 [2024-11-20 09:18:49.963557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:11.101 [2024-11-20 09:18:49.963932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.360 [2024-11-20 09:18:50.065269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:11.360 [2024-11-20 09:18:50.065675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.927 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 [2024-11-20 09:18:50.844717] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 Malloc0 00:25:12.186 09:18:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 [2024-11-20 09:18:50.900812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.186 
09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=104382 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 104382 /var/tmp/bdevperf.sock 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 104382 ']' 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.186 09:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.186 [2024-11-20 09:18:50.965863] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:12.186 [2024-11-20 09:18:50.965953] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104382 ] 00:25:12.445 [2024-11-20 09:18:51.117436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.446 [2024-11-20 09:18:51.181060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.446 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:12.446 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:12.446 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.446 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.446 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:12.704 NVMe0n1 00:25:12.704 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.704 09:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:12.704 Running I/O for 10 seconds... 
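The bdevperf run above was launched with -q 1024 -o 4096, so the MiB/s columns in the samples and the results table that follow are derived directly from the reported IOPS and the fixed 4 KiB IO size. A minimal sketch of that conversion (the function name is illustrative, not part of SPDK or bdevperf):

```python
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to MiB/s for a fixed per-IO size.

    MiB/s = IOPS * io_size_bytes / 2**20; io_size_bytes defaults to the
    4096-byte IO size passed to bdevperf via -o above.
    """
    return iops * io_size_bytes / (1 << 20)

# The final result row of this run reports 8514.589393225464 IOPS at 4 KiB,
# which matches its "mibps" field of ~33.26:
print(iops_to_mibps(8514.589393225464))
```

With a 4096-byte IO size the conversion reduces to dividing IOPS by 256, which is a quick way to sanity-check the per-second samples against the throughput column.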
00:25:15.016 7452.00 IOPS, 29.11 MiB/s [2024-11-20T09:18:54.872Z] 7779.00 IOPS, 30.39 MiB/s [2024-11-20T09:18:55.808Z] 8094.67 IOPS, 31.62 MiB/s [2024-11-20T09:18:56.744Z] 8193.75 IOPS, 32.01 MiB/s [2024-11-20T09:18:57.676Z] 8283.80 IOPS, 32.36 MiB/s [2024-11-20T09:18:58.612Z] 8365.50 IOPS, 32.68 MiB/s [2024-11-20T09:18:59.571Z] 8388.71 IOPS, 32.77 MiB/s [2024-11-20T09:19:00.947Z] 8444.50 IOPS, 32.99 MiB/s [2024-11-20T09:19:01.884Z] 8436.67 IOPS, 32.96 MiB/s [2024-11-20T09:19:01.884Z] 8494.40 IOPS, 33.18 MiB/s 00:25:22.965 Latency(us) 00:25:22.965 [2024-11-20T09:19:01.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.965 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:25:22.965 Verification LBA range: start 0x0 length 0x4000 00:25:22.965 NVMe0n1 : 10.09 8514.59 33.26 0.00 0.00 119706.78 29312.47 115343.36 00:25:22.965 [2024-11-20T09:19:01.884Z] =================================================================================================================== 00:25:22.965 [2024-11-20T09:19:01.884Z] Total : 8514.59 33.26 0.00 0.00 119706.78 29312.47 115343.36 00:25:22.965 { 00:25:22.965 "results": [ 00:25:22.965 { 00:25:22.965 "job": "NVMe0n1", 00:25:22.965 "core_mask": "0x1", 00:25:22.965 "workload": "verify", 00:25:22.965 "status": "finished", 00:25:22.965 "verify_range": { 00:25:22.965 "start": 0, 00:25:22.965 "length": 16384 00:25:22.965 }, 00:25:22.965 "queue_depth": 1024, 00:25:22.965 "io_size": 4096, 00:25:22.965 "runtime": 10.087157, 00:25:22.965 "iops": 8514.589393225464, 00:25:22.965 "mibps": 33.26011481728697, 00:25:22.965 "io_failed": 0, 00:25:22.965 "io_timeout": 0, 00:25:22.965 "avg_latency_us": 119706.77716298605, 00:25:22.965 "min_latency_us": 29312.465454545454, 00:25:22.965 "max_latency_us": 115343.36 00:25:22.965 } 00:25:22.965 ], 00:25:22.965 "core_count": 1 00:25:22.965 } 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104382 ']' 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:22.965 killing process with pid 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104382' 00:25:22.965 Received shutdown signal, test time was about 10.000000 seconds 00:25:22.965 00:25:22.965 Latency(us) 00:25:22.965 [2024-11-20T09:19:01.884Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.965 [2024-11-20T09:19:01.884Z] =================================================================================================================== 00:25:22.965 [2024-11-20T09:19:01.884Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104382 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:22.965 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:23.224 rmmod nvme_tcp 00:25:23.224 rmmod nvme_fabrics 00:25:23.224 rmmod nvme_keyring 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 104329 ']' 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 104329 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 104329 ']' 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 104329 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 
-- # uname 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104329 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:23.224 killing process with pid 104329 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104329' 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 104329 00:25:23.224 09:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 104329 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:23.483 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:23.483 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # continue 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:23.483 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:25:23.483 00:25:23.483 real 0m13.295s 00:25:23.483 user 0m21.219s 00:25:23.483 sys 0m2.463s 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:23.483 ************************************ 00:25:23.483 END TEST nvmf_queue_depth 00:25:23.483 ************************************ 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.483 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:23.744 ************************************ 00:25:23.744 START TEST nvmf_nmic 00:25:23.744 ************************************ 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:25:23.744 * Looking for test storage... 00:25:23.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:25:23.744 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- 
# return 0 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:23.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.745 --rc genhtml_branch_coverage=1 00:25:23.745 --rc genhtml_function_coverage=1 00:25:23.745 --rc genhtml_legend=1 00:25:23.745 --rc geninfo_all_blocks=1 00:25:23.745 --rc geninfo_unexecuted_blocks=1 00:25:23.745 00:25:23.745 ' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:23.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.745 --rc genhtml_branch_coverage=1 00:25:23.745 --rc genhtml_function_coverage=1 00:25:23.745 --rc genhtml_legend=1 00:25:23.745 --rc geninfo_all_blocks=1 00:25:23.745 --rc geninfo_unexecuted_blocks=1 00:25:23.745 00:25:23.745 ' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:23.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.745 --rc genhtml_branch_coverage=1 00:25:23.745 --rc genhtml_function_coverage=1 00:25:23.745 --rc genhtml_legend=1 00:25:23.745 --rc geninfo_all_blocks=1 00:25:23.745 --rc geninfo_unexecuted_blocks=1 00:25:23.745 00:25:23.745 ' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:23.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:23.745 --rc genhtml_branch_coverage=1 00:25:23.745 --rc genhtml_function_coverage=1 00:25:23.745 --rc genhtml_legend=1 00:25:23.745 --rc geninfo_all_blocks=1 00:25:23.745 --rc geninfo_unexecuted_blocks=1 00:25:23.745 00:25:23.745 ' 00:25:23.745 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:23.745 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:25:23.745 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:23.745 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:23.746 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@223 -- # create_target_ns 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns 
exec nvmf_ns_spdk ip link set lo up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local 
no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:23.746 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link 
add target0 type veth peer name target0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target0 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:23.746 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local 
dev=initiator0 ip=167772161 in_ns= 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:24.006 10.0.0.1 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:24.006 10.0.0.2 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:24.006 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:24.006 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( 
_dev < max + no )) 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@151 -- # set_up target1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1 up 
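The `setup_interfaces 2 veth` loop traced above hands each initiator/target pair two consecutive addresses from a 10.0.0.0/24 pool starting at `0x0a000001` (167772161), which is why pair 0 receives 167772161/167772162 and pair 1 receives 167772163/167772164. A minimal standalone sketch of just that allocation arithmetic (variable names taken from the trace; the veth/bridge plumbing is deliberately dropped):

```shell
# Simplified sketch of the per-pair IP allocation in setup_interfaces:
# each initiator/target pair consumes two consecutive integers from
# the pool, and the loop advances the pool by 2 per pair.
no=2                      # number of initiator/target pairs
ip_pool=$((0x0a000001))   # 167772161 == 10.0.0.1
_dev=0
while (( _dev < no )); do
  initiator_ip=$ip_pool
  target_ip=$(( ip_pool + 1 ))
  echo "pair $_dev: initiator=$initiator_ip target=$target_ip"
  (( _dev++, ip_pool += 2 ))
done
```

This reproduces the `setup_interface_pair 0 veth 167772161` / `setup_interface_pair 1 veth 167772163` calls visible in the trace.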
00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772163 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:24.007 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:24.007 10.0.0.3 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772164 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:24.007 10.0.0.4 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 
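The `val_to_ip` helper seen repeatedly in the trace turns those pool integers into dotted-quad strings via `printf '%u.%u.%u.%u\n'`. A self-contained version of that conversion (the byte extraction is an assumption about how the helper derives its four operands; the trace only shows the final `printf`):

```shell
# Convert a 32-bit integer to dotted-quad notation, mirroring the
# val_to_ip helper's printf '%u.%u.%u.%u' call from the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772164   # -> 10.0.0.4
```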
00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:24.007 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:24.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:25:24.008 00:25:24.008 --- 10.0.0.1 ping statistics --- 00:25:24.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.008 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/setup.sh@101 -- # echo target0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:24.008 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:24.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
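Earlier in this trace (nvmf/setup.sh@73 calling common.sh@547) the `ipts` wrapper opens TCP port 4420 for each initiator and tags the rule so teardown can find it later. A minimal runnable sketch of that pattern, with `iptables` stubbed out by a shell function since the real command needs root:

```shell
# ipts forwards its arguments to iptables and appends an "SPDK_NVMF:" comment,
# so cleanup can later delete exactly the rules this test run inserted.
# iptables is stubbed here so the sketch runs unprivileged; the expansion
# printed matches the one visible in the trace above.
iptables() { echo "iptables $*"; }

ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
```

Teardown can then remove the suite's rules by grepping `iptables-save` for the `SPDK_NVMF:` comment rather than tracking each rule individually.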
00:25:24.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:25:24.268 00:25:24.268 --- 10.0.0.2 ping statistics --- 00:25:24.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.268 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:24.268 
09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:24.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
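The IP lookups traced above (nvmf/setup.sh@156-166) read each device's address from its sysfs `ifalias` file, prefixing the read with `ip netns exec nvmf_ns_spdk` for target-side devices. A simplified sketch of that lookup, using a temp directory as a stand-in for `/sys/class/net` so it runs anywhere:

```shell
# Stand-in sysfs tree; the real script reads /sys/class/net/<dev>/ifalias,
# where setup.sh stored each veth's assigned IP as the interface alias.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/initiator0" "$sysroot/target0"
echo 10.0.0.1 > "$sysroot/initiator0/ifalias"
echo 10.0.0.2 > "$sysroot/target0/ifalias"

get_ip_address() {
    local dev=$1 ip
    ip=$(cat "$sysroot/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"   # mirrors setup.sh@164-166: only echo when set
}

get_ip_address initiator0   # 10.0.0.1
get_ip_address target0      # 10.0.0.2
```

Storing the address in `ifalias` lets the scripts recover it with a single `cat` instead of parsing `ip addr show` output.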
00:25:24.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:25:24.268 00:25:24.268 --- 10.0.0.3 ping statistics --- 00:25:24.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.268 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:24.268 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:24.268 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:24.268 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:25:24.268 00:25:24.268 --- 10.0.0.4 ping statistics --- 00:25:24.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.268 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # return 0 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:24.268 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:24.268 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:24.269 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:24.269 09:19:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:24.269 09:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target0 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target0 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo target1 00:25:24.269 09:19:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=target1 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 
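The `ping_ips` loop traced above validates every initiator/target address pair; its `ping_ip` helper (setup.sh@80-83) takes an optional array *name* for a command prefix and binds it with a bash nameref, which is how target-side pings get wrapped in `ip netns exec nvmf_ns_spdk`. A sketch of that indirection, with the prefix array stubbed so it runs without the namespace:

```shell
# The optional second argument names an array holding a command prefix.
# "local -n" makes ns an alias for that array. The real value would start
# with "ip netns exec nvmf_ns_spdk"; an echo stub is used for illustration.
NVMF_TARGET_NS_CMD=(echo ip netns exec nvmf_ns_spdk)

ping_ip() {
    local ip=$1 in_ns=$2 count=1
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns        # nameref: ns now refers to the named array
    fi
    eval "${ns[*]} ping -c $count $ip"
}

ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
```

When the second argument is omitted, `${ns[*]}` expands empty and the ping runs in the default namespace, which is exactly the split visible in the trace between the `10.0.0.1`/`10.0.0.3` (netns) and `10.0.0.2`/`10.0.0.4` (host) pings.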
00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=104742 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 104742 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104742 ']' 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.269 09:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:24.269 [2024-11-20 09:19:03.118776] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:24.269 [2024-11-20 09:19:03.120034] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:24.269 [2024-11-20 09:19:03.120108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.528 [2024-11-20 09:19:03.275203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.528 [2024-11-20 09:19:03.347576] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.528 [2024-11-20 09:19:03.347637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.528 [2024-11-20 09:19:03.347658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.528 [2024-11-20 09:19:03.347668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.528 [2024-11-20 09:19:03.347678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.528 [2024-11-20 09:19:03.348895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.528 [2024-11-20 09:19:03.348990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.528 [2024-11-20 09:19:03.349136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.528 [2024-11-20 09:19:03.349143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.786 [2024-11-20 09:19:03.446404] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:24.786 [2024-11-20 09:19:03.446595] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:24.786 [2024-11-20 09:19:03.447523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
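`nvmfappstart` above launches `nvmf_tgt` inside the netns and then calls `waitforlisten 104742`, which blocks until the pid is alive and the RPC socket `/var/tmp/spdk.sock` is ready. A simplified, hedged sketch of that polling pattern (the real helper in autotest_common.sh also issues an RPC probe; here mere presence of the socket path stands in for that, and `-e` is used instead of `-S` so the sketch is testable with a plain file):

```shell
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process died
        [[ -e $rpc_addr ]] && return 0           # socket path showed up
        sleep 0.1
    done
    return 1   # gave up waiting
}
```

Checking the pid on every iteration matters: if the target crashes during startup, the wait fails fast instead of burning the full retry budget.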
00:25:24.786 [2024-11-20 09:19:03.448081] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:24.786 [2024-11-20 09:19:03.448562] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.353 [2024-11-20 09:19:04.170213] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.353 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.354 Malloc0 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.354 [2024-11-20 09:19:04.254570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.354 09:19:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:25:25.354 test case1: single bdev can't be used in multiple subsystems 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.354 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.612 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.612 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:25:25.612 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:25:25.612 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.612 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.612 [2024-11-20 09:19:04.278077] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:25:25.612 [2024-11-20 09:19:04.278127] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:25:25.613 [2024-11-20 09:19:04.278142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:25.613 2024/11/20 09:19:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:25.613 request: 00:25:25.613 { 00:25:25.613 "method": "nvmf_subsystem_add_ns", 00:25:25.613 "params": { 00:25:25.613 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:25:25.613 "namespace": { 00:25:25.613 "bdev_name": "Malloc0", 00:25:25.613 "no_auto_visible": false 00:25:25.613 } 00:25:25.613 } 00:25:25.613 } 00:25:25.613 Got JSON-RPC error response 00:25:25.613 GoRPCClient: error on JSON-RPC call 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:25:25.613 Adding namespace failed - expected result. 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
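Test case1 above fails by design: cnode1's `nvmf_subsystem_add_ns` took an exclusive_write claim on Malloc0 (the bdev.c:8199 error), so adding the same bdev to cnode2 is rejected and the script flips `nmic_status` to 1. That claim behaviour can be sketched with an associative array standing in for SPDK's bdev claim table (`add_ns` below is illustrative, not an SPDK API):

```shell
declare -A claim   # bdev name -> claiming module

add_ns() {   # add_ns <subsystem-nqn> <bdev>
    local nqn=$1 bdev=$2
    if [[ -n ${claim[$bdev]} ]]; then
        # mirrors the bdev_open error in the log above
        echo "bdev $bdev already claimed: type exclusive_write by module ${claim[$bdev]}" >&2
        return 1
    fi
    claim[$bdev]="NVMe-oF Target"
}

nmic_status=0
add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # first claim succeeds
add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || nmic_status=1    # expected failure
echo "nmic_status=$nmic_status"
```

The test then asserts the failure happened (`'[' 1 -eq 0 ']'` evaluating false in the trace) before printing "Adding namespace failed - expected result."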
00:25:25.613 test case2: host connect to nvmf target in multiple paths 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:25.613 [2024-11-20 09:19:04.290230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:25.613 09:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:25:28.141 09:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:28.141 [global] 00:25:28.141 thread=1 00:25:28.141 invalidate=1 00:25:28.141 rw=write 00:25:28.141 time_based=1 00:25:28.141 runtime=1 00:25:28.141 ioengine=libaio 00:25:28.141 direct=1 00:25:28.141 bs=4096 00:25:28.141 iodepth=1 00:25:28.141 norandommap=0 00:25:28.141 numjobs=1 00:25:28.141 00:25:28.141 verify_dump=1 00:25:28.141 verify_backlog=512 00:25:28.141 verify_state_save=0 00:25:28.141 do_verify=1 00:25:28.141 verify=crc32c-intel 00:25:28.141 [job0] 00:25:28.141 filename=/dev/nvme0n1 00:25:28.141 Could not set queue depth (nvme0n1) 00:25:28.141 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:28.141 fio-3.35 00:25:28.141 Starting 1 thread 00:25:29.076 00:25:29.076 job0: (groupid=0, jobs=1): err= 0: pid=104848: Wed Nov 20 09:19:07 2024 00:25:29.076 
read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:25:29.076 slat (nsec): min=14280, max=71427, avg=22354.91, stdev=5339.05 00:25:29.076 clat (usec): min=161, max=257, avg=182.93, stdev=10.54 00:25:29.076 lat (usec): min=182, max=279, avg=205.29, stdev=11.69 00:25:29.076 clat percentiles (usec): 00:25:29.076 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 172], 20.00th=[ 176], 00:25:29.076 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 184], 00:25:29.076 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 200], 00:25:29.076 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 243], 99.95th=[ 255], 00:25:29.076 | 99.99th=[ 258] 00:25:29.076 write: IOPS=2877, BW=11.2MiB/s (11.8MB/s)(11.2MiB/1001msec); 0 zone resets 00:25:29.076 slat (usec): min=19, max=151, avg=31.13, stdev= 7.70 00:25:29.076 clat (usec): min=108, max=293, avg=129.13, stdev= 9.89 00:25:29.076 lat (usec): min=137, max=444, avg=160.26, stdev=12.10 00:25:29.076 clat percentiles (usec): 00:25:29.076 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 119], 20.00th=[ 122], 00:25:29.076 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 130], 00:25:29.076 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 147], 00:25:29.076 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 176], 00:25:29.076 | 99.99th=[ 293] 00:25:29.076 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:25:29.076 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:29.076 lat (usec) : 250=99.94%, 500=0.06% 00:25:29.076 cpu : usr=2.60%, sys=11.20%, ctx=5440, majf=0, minf=5 00:25:29.076 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:29.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.076 issued rwts: total=2560,2880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.076 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:25:29.076 00:25:29.076 Run status group 0 (all jobs): 00:25:29.076 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:25:29.076 WRITE: bw=11.2MiB/s (11.8MB/s), 11.2MiB/s-11.2MiB/s (11.8MB/s-11.8MB/s), io=11.2MiB (11.8MB), run=1001-1001msec 00:25:29.076 00:25:29.076 Disk stats (read/write): 00:25:29.076 nvme0n1: ios=2345/2560, merge=0/0, ticks=465/374, in_queue=839, util=91.06% 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:29.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:29.076 
09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:29.076 rmmod nvme_tcp 00:25:29.076 rmmod nvme_fabrics 00:25:29.076 rmmod nvme_keyring 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 104742 ']' 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 104742 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104742 ']' 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104742 00:25:29.076 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:25:29.334 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.334 09:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104742 00:25:29.334 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:25:29.334 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.334 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104742' 00:25:29.334 killing process with pid 104742 00:25:29.334 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104742 00:25:29.334 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104742 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:29.593 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:29.593 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # continue 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:25:29.593 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:25:29.594 00:25:29.594 real 0m6.027s 00:25:29.594 user 0m14.876s 00:25:29.594 sys 0m2.343s 00:25:29.594 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:25:29.594 ************************************ 00:25:29.594 END TEST nvmf_nmic 00:25:29.594 ************************************ 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:29.594 ************************************ 00:25:29.594 START TEST nvmf_fio_target 00:25:29.594 ************************************ 00:25:29.594 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:25:29.853 * Looking for test storage... 
00:25:29.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:29.853 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:29.853 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.853 --rc genhtml_branch_coverage=1 00:25:29.853 --rc genhtml_function_coverage=1 00:25:29.853 --rc genhtml_legend=1 00:25:29.853 --rc geninfo_all_blocks=1 00:25:29.853 --rc geninfo_unexecuted_blocks=1 00:25:29.853 00:25:29.853 ' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.853 --rc genhtml_branch_coverage=1 00:25:29.853 --rc genhtml_function_coverage=1 00:25:29.853 --rc genhtml_legend=1 00:25:29.853 --rc geninfo_all_blocks=1 00:25:29.853 --rc geninfo_unexecuted_blocks=1 00:25:29.853 00:25:29.853 ' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.853 --rc genhtml_branch_coverage=1 00:25:29.853 --rc genhtml_function_coverage=1 00:25:29.853 --rc genhtml_legend=1 00:25:29.853 --rc geninfo_all_blocks=1 00:25:29.853 --rc geninfo_unexecuted_blocks=1 00:25:29.853 00:25:29.853 ' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:29.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:29.853 --rc genhtml_branch_coverage=1 00:25:29.853 --rc genhtml_function_coverage=1 00:25:29.853 --rc genhtml_legend=1 00:25:29.853 --rc geninfo_all_blocks=1 00:25:29.853 --rc 
geninfo_unexecuted_blocks=1 00:25:29.853 00:25:29.853 ' 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.853 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 
00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@223 -- # create_target_ns 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:29.854 
09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:29.854 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 
00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:29.855 
09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target0 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:29.855 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:29.855 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:30.115 
09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:30.115 10.0.0.1 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:30.115 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:30.115 10.0.0.2 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local 
dev=initiator0_br bridge=nvmf_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set 
target0_br up 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:30.115 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- 
# _ns=NVMF_TARGET_NS_CMD 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set 
initiator1_br up 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@151 -- # set_up target1 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:30.116 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772163 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator1/ifalias 00:25:30.116 10.0.0.3 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772164 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:30.116 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:30.117 10.0.0.4 00:25:30.117 09:19:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip 
link set initiator1_br master nvmf_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:30.117 09:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 
00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:30.117 09:19:09 
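At this point the trace has finished both passes of the `setup_interfaces 2 veth` loop: each initiator/target pair consumes two consecutive addresses from an IP pool that starts at 0x0a000001 (10.0.0.1), and the loop bumps the pool by 2 per pair (`(( _dev++, ip_pool += 2 ))`). A minimal standalone sketch of that allocation, assuming the same pool layout (the variable names mirror the ones in the traced `nvmf/setup.sh`, but this is an illustration, not the script itself):

```shell
# Sketch of the per-pair IP allocation seen in setup_interfaces:
# two consecutive pool values per pair (initiator, then target).
ip_pool=$((0x0a000001))   # 167772161 == 10.0.0.1
for id in 0 1; do
  initiator_ip=$ip_pool
  target_ip=$((ip_pool + 1))
  echo "pair$id: initiator=$initiator_ip target=$target_ip"
  (( ip_pool += 2 ))      # advance pool past this pair
done
```

Run as-is, this prints pair0 with 167772161/167772162 and pair1 with 167772163/167772164, matching the `setup_interface_pair 0 veth 167772161` and `setup_interface_pair 1 veth 167772163` calls in the trace above.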
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.117 09:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:30.117 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:30.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:30.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:25:30.117 00:25:30.117 --- 10.0.0.1 ping statistics --- 00:25:30.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.118 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:25:30.377 09:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:30.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:30.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:25:30.377 00:25:30.377 --- 10.0.0.2 ping statistics --- 00:25:30.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.377 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:30.377 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:30.378 09:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:30.378 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:30.378 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:30.378 00:25:30.378 --- 10.0.0.3 ping statistics --- 00:25:30.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.378 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:30.378 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:30.378 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:25:30.378 00:25:30.378 --- 10.0.0.4 ping statistics --- 00:25:30.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:30.378 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # return 0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:30.378 09:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:30.378 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo initiator1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local 
dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target0 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target0 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo target1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=target1 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:30.379 09:19:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=105076 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip 
netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 105076 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105076 ']' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.379 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.379 [2024-11-20 09:19:09.232837] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:30.379 [2024-11-20 09:19:09.233889] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:30.379 [2024-11-20 09:19:09.233957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.637 [2024-11-20 09:19:09.379667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:30.637 [2024-11-20 09:19:09.444973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:30.637 [2024-11-20 09:19:09.445034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.637 [2024-11-20 09:19:09.445046] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.637 [2024-11-20 09:19:09.445054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.637 [2024-11-20 09:19:09.445062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:30.637 [2024-11-20 09:19:09.446159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.637 [2024-11-20 09:19:09.446224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:30.637 [2024-11-20 09:19:09.446316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:30.637 [2024-11-20 09:19:09.446320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.637 [2024-11-20 09:19:09.542152] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:30.637 [2024-11-20 09:19:09.542614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:30.637 [2024-11-20 09:19:09.542874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:30.638 [2024-11-20 09:19:09.543112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:30.638 [2024-11-20 09:19:09.543456] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
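At this point the log has launched nvmf_tgt inside the nvmf_ns_spdk namespace with `--interrupt-mode -m 0xF`, and `waitforlisten 105076` blocks until the process answers on /var/tmp/spdk.sock before the RPC calls below proceed. A rough sketch of that wait-for-socket polling is below; the socket path matches the log, but the retry budget and sleep interval are illustrative assumptions, not values read from autotest_common.sh:

```shell
# Hedged sketch of a waitforlisten-style poll: succeed once the RPC UNIX
# socket exists, fail after the retry budget is spent.
waitforsocket() {
    local sock=$1 retries=${2:-100}   # retry count is an assumed default
    while ((retries-- > 0)); do
        [[ -S $sock ]] && return 0    # -S: path exists and is a socket
        sleep 0.1                     # interval is an assumed value
    done
    return 1
}

# Usage as it would follow the nvmf_tgt launch in the real script:
# waitforsocket /var/tmp/spdk.sock || echo "target never came up" >&2
```

The real helper additionally checks that the pid is still alive between polls, which is why the log shows both the pid guard (`'[' -z 105076 ']'`) and the "Waiting for process to start up and listen on UNIX domain socket" message.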
00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.896 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:31.155 [2024-11-20 09:19:09.931104] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.155 09:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:31.428 09:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:25:31.428 09:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:31.998 09:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:25:31.998 09:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:32.256 09:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 
-- # raid_malloc_bdevs='Malloc2 ' 00:25:32.256 09:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:32.823 09:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:25:32.823 09:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:25:33.081 09:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.339 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:25:33.339 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.597 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:25:33.598 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:33.855 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:25:33.855 09:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:25:34.113 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:34.371 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:34.371 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.937 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:25:34.937 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:34.937 09:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.195 [2024-11-20 09:19:14.075019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.195 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:25:35.453 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:25:36.019 09:19:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:25:36.019 09:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:25:37.999 09:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:25:37.999 [global] 00:25:38.000 thread=1 00:25:38.000 invalidate=1 00:25:38.000 rw=write 00:25:38.000 time_based=1 00:25:38.000 runtime=1 00:25:38.000 ioengine=libaio 00:25:38.000 direct=1 00:25:38.000 bs=4096 00:25:38.000 iodepth=1 00:25:38.000 norandommap=0 00:25:38.000 numjobs=1 00:25:38.000 00:25:38.000 verify_dump=1 00:25:38.000 verify_backlog=512 00:25:38.000 verify_state_save=0 00:25:38.000 do_verify=1 00:25:38.000 
verify=crc32c-intel 00:25:38.000 [job0] 00:25:38.000 filename=/dev/nvme0n1 00:25:38.000 [job1] 00:25:38.000 filename=/dev/nvme0n2 00:25:38.000 [job2] 00:25:38.000 filename=/dev/nvme0n3 00:25:38.000 [job3] 00:25:38.000 filename=/dev/nvme0n4 00:25:38.000 Could not set queue depth (nvme0n1) 00:25:38.000 Could not set queue depth (nvme0n2) 00:25:38.000 Could not set queue depth (nvme0n3) 00:25:38.000 Could not set queue depth (nvme0n4) 00:25:38.264 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.264 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.264 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.264 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:38.264 fio-3.35 00:25:38.264 Starting 4 threads 00:25:39.197 00:25:39.197 job0: (groupid=0, jobs=1): err= 0: pid=105361: Wed Nov 20 09:19:18 2024 00:25:39.197 read: IOPS=2061, BW=8248KiB/s (8446kB/s)(8256KiB/1001msec) 00:25:39.197 slat (nsec): min=12420, max=54780, avg=20207.24, stdev=7152.86 00:25:39.197 clat (usec): min=163, max=2597, avg=216.14, stdev=70.33 00:25:39.197 lat (usec): min=177, max=2623, avg=236.34, stdev=69.20 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:25:39.197 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:25:39.197 | 70.00th=[ 204], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:25:39.197 | 99.00th=[ 330], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 416], 00:25:39.197 | 99.99th=[ 2606] 00:25:39.197 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:25:39.197 slat (nsec): min=18432, max=91049, avg=27421.51, stdev=9686.58 00:25:39.197 clat (usec): min=114, max=1003, avg=168.90, stdev=44.52 00:25:39.197 lat (usec): min=134, 
max=1043, avg=196.33, stdev=46.11 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 131], 00:25:39.197 | 30.00th=[ 135], 40.00th=[ 141], 50.00th=[ 147], 60.00th=[ 174], 00:25:39.197 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 235], 00:25:39.197 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 277], 99.95th=[ 334], 00:25:39.197 | 99.99th=[ 1004] 00:25:39.197 bw ( KiB/s): min= 8192, max= 8192, per=22.43%, avg=8192.00, stdev= 0.00, samples=1 00:25:39.197 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:39.197 lat (usec) : 250=88.58%, 500=11.38% 00:25:39.197 lat (msec) : 2=0.02%, 4=0.02% 00:25:39.197 cpu : usr=2.30%, sys=8.30%, ctx=4628, majf=0, minf=11 00:25:39.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.197 issued rwts: total=2064,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.197 job1: (groupid=0, jobs=1): err= 0: pid=105362: Wed Nov 20 09:19:18 2024 00:25:39.197 read: IOPS=2427, BW=9708KiB/s (9941kB/s)(9708KiB/1000msec) 00:25:39.197 slat (nsec): min=13669, max=33856, avg=16454.84, stdev=3073.04 00:25:39.197 clat (usec): min=180, max=471, avg=208.17, stdev=15.46 00:25:39.197 lat (usec): min=195, max=487, avg=224.62, stdev=15.90 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:25:39.197 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 208], 00:25:39.197 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 231], 00:25:39.197 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 408], 99.95th=[ 445], 00:25:39.197 | 99.99th=[ 474] 00:25:39.197 write: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec); 0 zone resets 
00:25:39.197 slat (nsec): min=18875, max=90159, avg=22678.03, stdev=4648.32 00:25:39.197 clat (usec): min=123, max=319, avg=151.67, stdev=15.81 00:25:39.197 lat (usec): min=143, max=376, avg=174.35, stdev=18.46 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:25:39.197 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:25:39.197 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 174], 95.00th=[ 182], 00:25:39.197 | 99.00th=[ 200], 99.50th=[ 210], 99.90th=[ 253], 99.95th=[ 285], 00:25:39.197 | 99.99th=[ 322] 00:25:39.197 bw ( KiB/s): min=11840, max=11840, per=32.41%, avg=11840.00, stdev= 0.00, samples=1 00:25:39.197 iops : min= 2960, max= 2960, avg=2960.00, stdev= 0.00, samples=1 00:25:39.197 lat (usec) : 250=99.40%, 500=0.60% 00:25:39.197 cpu : usr=1.80%, sys=7.20%, ctx=4992, majf=0, minf=9 00:25:39.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.197 issued rwts: total=2427,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.197 job2: (groupid=0, jobs=1): err= 0: pid=105363: Wed Nov 20 09:19:18 2024 00:25:39.197 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:39.197 slat (nsec): min=11457, max=59290, avg=16640.76, stdev=4470.23 00:25:39.197 clat (usec): min=179, max=678, avg=306.76, stdev=35.03 00:25:39.197 lat (usec): min=193, max=691, avg=323.40, stdev=35.01 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 217], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285], 00:25:39.197 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 310], 00:25:39.197 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 355], 00:25:39.197 | 99.00th=[ 429], 99.50th=[ 529], 99.90th=[ 603], 
99.95th=[ 676], 00:25:39.197 | 99.99th=[ 676] 00:25:39.197 write: IOPS=1971, BW=7884KiB/s (8073kB/s)(7892KiB/1001msec); 0 zone resets 00:25:39.197 slat (nsec): min=12660, max=67024, avg=25747.31, stdev=8827.01 00:25:39.197 clat (usec): min=132, max=909, avg=225.89, stdev=38.59 00:25:39.197 lat (usec): min=162, max=926, avg=251.64, stdev=38.05 00:25:39.197 clat percentiles (usec): 00:25:39.197 | 1.00th=[ 151], 5.00th=[ 172], 10.00th=[ 188], 20.00th=[ 202], 00:25:39.197 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 229], 00:25:39.197 | 70.00th=[ 237], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 285], 00:25:39.197 | 99.00th=[ 310], 99.50th=[ 326], 99.90th=[ 570], 99.95th=[ 914], 00:25:39.197 | 99.99th=[ 914] 00:25:39.197 bw ( KiB/s): min= 8192, max= 8192, per=22.43%, avg=8192.00, stdev= 0.00, samples=1 00:25:39.197 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:39.197 lat (usec) : 250=45.00%, 500=54.63%, 750=0.34%, 1000=0.03% 00:25:39.197 cpu : usr=1.50%, sys=6.00%, ctx=3510, majf=0, minf=13 00:25:39.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.198 issued rwts: total=1536,1973,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.198 job3: (groupid=0, jobs=1): err= 0: pid=105364: Wed Nov 20 09:19:18 2024 00:25:39.198 read: IOPS=1791, BW=7165KiB/s (7337kB/s)(7172KiB/1001msec) 00:25:39.198 slat (nsec): min=11452, max=53364, avg=18032.59, stdev=5795.76 00:25:39.198 clat (usec): min=177, max=909, avg=276.19, stdev=57.38 00:25:39.198 lat (usec): min=192, max=926, avg=294.23, stdev=57.04 00:25:39.198 clat percentiles (usec): 00:25:39.198 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 208], 00:25:39.198 | 30.00th=[ 258], 40.00th=[ 273], 50.00th=[ 289], 
60.00th=[ 297], 00:25:39.198 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 347], 00:25:39.198 | 99.00th=[ 375], 99.50th=[ 537], 99.90th=[ 668], 99.95th=[ 914], 00:25:39.198 | 99.99th=[ 914] 00:25:39.198 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:25:39.198 slat (nsec): min=12873, max=79507, avg=27931.50, stdev=12854.92 00:25:39.198 clat (usec): min=128, max=565, avg=198.87, stdev=51.53 00:25:39.198 lat (usec): min=150, max=590, avg=226.80, stdev=48.76 00:25:39.198 clat percentiles (usec): 00:25:39.198 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:25:39.198 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 182], 60.00th=[ 204], 00:25:39.198 | 70.00th=[ 227], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 285], 00:25:39.198 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 494], 99.95th=[ 515], 00:25:39.198 | 99.99th=[ 570] 00:25:39.198 bw ( KiB/s): min= 8952, max= 8952, per=24.51%, avg=8952.00, stdev= 0.00, samples=1 00:25:39.198 iops : min= 2238, max= 2238, avg=2238.00, stdev= 0.00, samples=1 00:25:39.198 lat (usec) : 250=55.12%, 500=44.57%, 750=0.29%, 1000=0.03% 00:25:39.198 cpu : usr=1.90%, sys=6.70%, ctx=3841, majf=0, minf=5 00:25:39.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:39.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.198 issued rwts: total=1793,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:39.198 00:25:39.198 Run status group 0 (all jobs): 00:25:39.198 READ: bw=30.5MiB/s (32.0MB/s), 6138KiB/s-9708KiB/s (6285kB/s-9941kB/s), io=30.5MiB (32.0MB), run=1000-1001msec 00:25:39.198 WRITE: bw=35.7MiB/s (37.4MB/s), 7884KiB/s-10.0MiB/s (8073kB/s-10.5MB/s), io=35.7MiB (37.4MB), run=1000-1001msec 00:25:39.198 00:25:39.198 Disk stats (read/write): 00:25:39.198 nvme0n1: ios=1884/2048, 
merge=0/0, ticks=443/381, in_queue=824, util=89.17% 00:25:39.198 nvme0n2: ios=2097/2309, merge=0/0, ticks=510/368, in_queue=878, util=94.14% 00:25:39.198 nvme0n3: ios=1499/1536, merge=0/0, ticks=470/352, in_queue=822, util=89.74% 00:25:39.198 nvme0n4: ios=1592/1884, merge=0/0, ticks=494/391, in_queue=885, util=93.92% 00:25:39.198 09:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:25:39.198 [global] 00:25:39.198 thread=1 00:25:39.198 invalidate=1 00:25:39.198 rw=randwrite 00:25:39.198 time_based=1 00:25:39.198 runtime=1 00:25:39.198 ioengine=libaio 00:25:39.198 direct=1 00:25:39.198 bs=4096 00:25:39.198 iodepth=1 00:25:39.198 norandommap=0 00:25:39.198 numjobs=1 00:25:39.198 00:25:39.198 verify_dump=1 00:25:39.198 verify_backlog=512 00:25:39.198 verify_state_save=0 00:25:39.198 do_verify=1 00:25:39.198 verify=crc32c-intel 00:25:39.198 [job0] 00:25:39.198 filename=/dev/nvme0n1 00:25:39.198 [job1] 00:25:39.198 filename=/dev/nvme0n2 00:25:39.198 [job2] 00:25:39.198 filename=/dev/nvme0n3 00:25:39.455 [job3] 00:25:39.455 filename=/dev/nvme0n4 00:25:39.455 Could not set queue depth (nvme0n1) 00:25:39.455 Could not set queue depth (nvme0n2) 00:25:39.455 Could not set queue depth (nvme0n3) 00:25:39.455 Could not set queue depth (nvme0n4) 00:25:39.455 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:39.455 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:39.456 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:39.456 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:39.456 fio-3.35 00:25:39.456 Starting 4 threads 00:25:40.827 00:25:40.827 job0: (groupid=0, jobs=1): err= 0: pid=105417: Wed 
Nov 20 09:19:19 2024 00:25:40.827 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:25:40.827 slat (nsec): min=13914, max=63062, avg=25479.46, stdev=6011.52 00:25:40.827 clat (usec): min=167, max=3224, avg=231.20, stdev=99.08 00:25:40.827 lat (usec): min=183, max=3252, avg=256.68, stdev=100.58 00:25:40.827 clat percentiles (usec): 00:25:40.827 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:25:40.827 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 208], 00:25:40.827 | 70.00th=[ 235], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:25:40.827 | 99.00th=[ 388], 99.50th=[ 441], 99.90th=[ 611], 99.95th=[ 2376], 00:25:40.827 | 99.99th=[ 3228] 00:25:40.827 write: IOPS=2160, BW=8640KiB/s (8847kB/s)(8640KiB/1000msec); 0 zone resets 00:25:40.827 slat (usec): min=19, max=113, avg=34.04, stdev= 8.92 00:25:40.827 clat (usec): min=114, max=3194, avg=180.16, stdev=81.47 00:25:40.827 lat (usec): min=138, max=3243, avg=214.20, stdev=84.93 00:25:40.827 clat percentiles (usec): 00:25:40.827 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 130], 20.00th=[ 135], 00:25:40.827 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 157], 60.00th=[ 204], 00:25:40.827 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 243], 00:25:40.827 | 99.00th=[ 285], 99.50th=[ 355], 99.90th=[ 660], 99.95th=[ 914], 00:25:40.827 | 99.99th=[ 3195] 00:25:40.827 bw ( KiB/s): min= 8192, max= 8192, per=23.41%, avg=8192.00, stdev= 0.00, samples=1 00:25:40.827 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:25:40.827 lat (usec) : 250=84.55%, 500=15.26%, 750=0.10%, 1000=0.02% 00:25:40.827 lat (msec) : 4=0.07% 00:25:40.827 cpu : usr=3.20%, sys=8.50%, ctx=4208, majf=0, minf=7 00:25:40.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 issued rwts: 
total=2048,2160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:40.828 job1: (groupid=0, jobs=1): err= 0: pid=105418: Wed Nov 20 09:19:19 2024 00:25:40.828 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:25:40.828 slat (usec): min=11, max=983, avg=16.55, stdev=21.81 00:25:40.828 clat (usec): min=143, max=625, avg=241.59, stdev=59.73 00:25:40.828 lat (usec): min=178, max=1127, avg=258.14, stdev=62.38 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 184], 00:25:40.828 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 208], 60.00th=[ 285], 00:25:40.828 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:25:40.828 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 490], 99.95th=[ 515], 00:25:40.828 | 99.99th=[ 627] 00:25:40.828 write: IOPS=2364, BW=9459KiB/s (9686kB/s)(9468KiB/1001msec); 0 zone resets 00:25:40.828 slat (nsec): min=11605, max=96407, avg=23744.36, stdev=6470.28 00:25:40.828 clat (usec): min=112, max=506, avg=171.82, stdev=52.58 00:25:40.828 lat (usec): min=135, max=526, avg=195.56, stdev=52.50 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 131], 00:25:40.828 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 153], 00:25:40.828 | 70.00th=[ 208], 80.00th=[ 225], 90.00th=[ 258], 95.00th=[ 281], 00:25:40.828 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 334], 99.95th=[ 343], 00:25:40.828 | 99.99th=[ 506] 00:25:40.828 bw ( KiB/s): min=12288, max=12288, per=35.11%, avg=12288.00, stdev= 0.00, samples=1 00:25:40.828 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:25:40.828 lat (usec) : 250=71.42%, 500=28.52%, 750=0.07% 00:25:40.828 cpu : usr=1.80%, sys=6.70%, ctx=4418, majf=0, minf=23 00:25:40.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:25:40.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 issued rwts: total=2048,2367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:40.828 job2: (groupid=0, jobs=1): err= 0: pid=105419: Wed Nov 20 09:19:19 2024 00:25:40.828 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:25:40.828 slat (nsec): min=13859, max=77250, avg=22639.04, stdev=9679.92 00:25:40.828 clat (usec): min=187, max=421, avg=224.78, stdev=19.46 00:25:40.828 lat (usec): min=202, max=452, avg=247.42, stdev=25.09 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:25:40.828 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:25:40.828 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 265], 00:25:40.828 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 334], 99.95th=[ 412], 00:25:40.828 | 99.99th=[ 420] 00:25:40.828 write: IOPS=2329, BW=9319KiB/s (9542kB/s)(9328KiB/1001msec); 0 zone resets 00:25:40.828 slat (usec): min=19, max=606, avg=32.06, stdev=16.83 00:25:40.828 clat (usec): min=4, max=1764, avg=174.86, stdev=43.39 00:25:40.828 lat (usec): min=156, max=1785, avg=206.92, stdev=46.09 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:25:40.828 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:25:40.828 | 70.00th=[ 182], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 217], 00:25:40.828 | 99.00th=[ 253], 99.50th=[ 297], 99.90th=[ 553], 99.95th=[ 725], 00:25:40.828 | 99.99th=[ 1762] 00:25:40.828 bw ( KiB/s): min=10120, max=10120, per=28.92%, avg=10120.00, stdev= 0.00, samples=1 00:25:40.828 iops : min= 2530, max= 2530, avg=2530.00, stdev= 0.00, samples=1 00:25:40.828 lat (usec) : 10=0.02%, 250=94.98%, 500=4.93%, 750=0.05% 00:25:40.828 lat (msec) : 2=0.02% 00:25:40.828 cpu : usr=2.60%, sys=8.70%, 
ctx=4382, majf=0, minf=10 00:25:40.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 issued rwts: total=2048,2332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:40.828 job3: (groupid=0, jobs=1): err= 0: pid=105420: Wed Nov 20 09:19:19 2024 00:25:40.828 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:25:40.828 slat (nsec): min=10978, max=47272, avg=19404.39, stdev=7060.92 00:25:40.828 clat (usec): min=212, max=1171, avg=304.75, stdev=31.65 00:25:40.828 lat (usec): min=235, max=1200, avg=324.15, stdev=33.38 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 265], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:25:40.828 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:25:40.828 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 326], 95.00th=[ 334], 00:25:40.828 | 99.00th=[ 383], 99.50th=[ 457], 99.90th=[ 578], 99.95th=[ 1172], 00:25:40.828 | 99.99th=[ 1172] 00:25:40.828 write: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec); 0 zone resets 00:25:40.828 slat (usec): min=10, max=113, avg=30.43, stdev= 8.86 00:25:40.828 clat (usec): min=133, max=1146, avg=229.66, stdev=39.43 00:25:40.828 lat (usec): min=160, max=1179, avg=260.09, stdev=39.54 00:25:40.828 clat percentiles (usec): 00:25:40.828 | 1.00th=[ 157], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 210], 00:25:40.828 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:25:40.828 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 269], 95.00th=[ 289], 00:25:40.828 | 99.00th=[ 334], 99.50th=[ 383], 99.90th=[ 668], 99.95th=[ 1139], 00:25:40.828 | 99.99th=[ 1139] 00:25:40.828 bw ( KiB/s): min= 8192, max= 8192, per=23.41%, avg=8192.00, stdev= 0.00, samples=1 00:25:40.828 iops : min= 2048, max= 2048, 
avg=2048.00, stdev= 0.00, samples=1 00:25:40.828 lat (usec) : 250=45.41%, 500=54.44%, 750=0.09% 00:25:40.828 lat (msec) : 2=0.06% 00:25:40.828 cpu : usr=2.00%, sys=6.30%, ctx=3437, majf=0, minf=7 00:25:40.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:40.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.828 issued rwts: total=1536,1899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.828 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:40.828 00:25:40.828 Run status group 0 (all jobs): 00:25:40.828 READ: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-8192KiB/s (6285kB/s-8389kB/s), io=30.0MiB (31.5MB), run=1000-1001msec 00:25:40.828 WRITE: bw=34.2MiB/s (35.8MB/s), 7588KiB/s-9459KiB/s (7771kB/s-9686kB/s), io=34.2MiB (35.9MB), run=1000-1001msec 00:25:40.828 00:25:40.828 Disk stats (read/write): 00:25:40.828 nvme0n1: ios=1586/2029, merge=0/0, ticks=430/397, in_queue=827, util=89.08% 00:25:40.828 nvme0n2: ios=1971/2048, merge=0/0, ticks=535/338, in_queue=873, util=90.90% 00:25:40.828 nvme0n3: ios=1815/2048, merge=0/0, ticks=417/379, in_queue=796, util=89.38% 00:25:40.828 nvme0n4: ios=1473/1536, merge=0/0, ticks=494/347, in_queue=841, util=90.79% 00:25:40.828 09:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:25:40.828 [global] 00:25:40.828 thread=1 00:25:40.828 invalidate=1 00:25:40.828 rw=write 00:25:40.828 time_based=1 00:25:40.828 runtime=1 00:25:40.828 ioengine=libaio 00:25:40.828 direct=1 00:25:40.828 bs=4096 00:25:40.828 iodepth=128 00:25:40.828 norandommap=0 00:25:40.828 numjobs=1 00:25:40.828 00:25:40.828 verify_dump=1 00:25:40.828 verify_backlog=512 00:25:40.828 verify_state_save=0 00:25:40.828 do_verify=1 00:25:40.828 verify=crc32c-intel 00:25:40.828 [job0] 00:25:40.828 
filename=/dev/nvme0n1 00:25:40.828 [job1] 00:25:40.828 filename=/dev/nvme0n2 00:25:40.828 [job2] 00:25:40.828 filename=/dev/nvme0n3 00:25:40.828 [job3] 00:25:40.828 filename=/dev/nvme0n4 00:25:40.828 Could not set queue depth (nvme0n1) 00:25:40.828 Could not set queue depth (nvme0n2) 00:25:40.828 Could not set queue depth (nvme0n3) 00:25:40.828 Could not set queue depth (nvme0n4) 00:25:40.828 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:40.828 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:40.828 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:40.828 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:40.828 fio-3.35 00:25:40.828 Starting 4 threads 00:25:42.203 00:25:42.203 job0: (groupid=0, jobs=1): err= 0: pid=105482: Wed Nov 20 09:19:20 2024 00:25:42.203 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:25:42.203 slat (usec): min=3, max=8589, avg=117.31, stdev=543.94 00:25:42.203 clat (usec): min=9175, max=39138, avg=15536.65, stdev=6571.70 00:25:42.203 lat (usec): min=9381, max=39154, avg=15653.96, stdev=6620.48 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11338], 00:25:42.203 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[12780], 00:25:42.203 | 70.00th=[13435], 80.00th=[23462], 90.00th=[26870], 95.00th=[28705], 00:25:42.203 | 99.00th=[34341], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:25:42.203 | 99.99th=[39060] 00:25:42.203 write: IOPS=4511, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1005msec); 0 zone resets 00:25:42.203 slat (usec): min=8, max=6324, avg=107.27, stdev=463.75 00:25:42.203 clat (usec): min=1976, max=33100, avg=13975.69, stdev=5487.66 00:25:42.203 lat (usec): min=4601, max=33128, avg=14082.96, 
stdev=5513.36 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[11076], 00:25:42.203 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:25:42.203 | 70.00th=[12387], 80.00th=[14353], 90.00th=[24511], 95.00th=[25822], 00:25:42.203 | 99.00th=[30278], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:25:42.203 | 99.99th=[33162] 00:25:42.203 bw ( KiB/s): min=13168, max=22072, per=27.58%, avg=17620.00, stdev=6296.08, samples=2 00:25:42.203 iops : min= 3292, max= 5520, avg=4406.00, stdev=1575.43, samples=2 00:25:42.203 lat (msec) : 2=0.01%, 10=7.64%, 20=71.58%, 50=20.78% 00:25:42.203 cpu : usr=3.78%, sys=13.75%, ctx=658, majf=0, minf=1 00:25:42.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:42.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:42.203 issued rwts: total=4096,4534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:42.203 job1: (groupid=0, jobs=1): err= 0: pid=105483: Wed Nov 20 09:19:20 2024 00:25:42.203 read: IOPS=3367, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:25:42.203 slat (usec): min=4, max=8476, avg=137.19, stdev=700.03 00:25:42.203 clat (usec): min=431, max=35664, avg=17240.26, stdev=6777.16 00:25:42.203 lat (usec): min=3093, max=40424, avg=17377.46, stdev=6811.03 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 6390], 5.00th=[ 9896], 10.00th=[11207], 20.00th=[11600], 00:25:42.203 | 30.00th=[11863], 40.00th=[12649], 50.00th=[13435], 60.00th=[19792], 00:25:42.203 | 70.00th=[22676], 80.00th=[25035], 90.00th=[26870], 95.00th=[28181], 00:25:42.203 | 99.00th=[31327], 99.50th=[33424], 99.90th=[34866], 99.95th=[35914], 00:25:42.203 | 99.99th=[35914] 00:25:42.203 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 
00:25:42.203 slat (usec): min=12, max=9842, avg=143.13, stdev=647.58 00:25:42.203 clat (usec): min=8817, max=38229, avg=18869.51, stdev=8867.25 00:25:42.203 lat (usec): min=8837, max=38251, avg=19012.65, stdev=8916.69 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 9765], 5.00th=[10159], 10.00th=[10552], 20.00th=[11076], 00:25:42.203 | 30.00th=[11600], 40.00th=[12649], 50.00th=[13435], 60.00th=[22414], 00:25:42.203 | 70.00th=[23725], 80.00th=[28181], 90.00th=[33424], 95.00th=[36439], 00:25:42.203 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[38011], 00:25:42.203 | 99.99th=[38011] 00:25:42.203 bw ( KiB/s): min= 9920, max=18752, per=22.44%, avg=14336.00, stdev=6245.17, samples=2 00:25:42.203 iops : min= 2480, max= 4688, avg=3584.00, stdev=1561.29, samples=2 00:25:42.203 lat (usec) : 500=0.01% 00:25:42.203 lat (msec) : 4=0.46%, 10=3.77%, 20=55.63%, 50=40.13% 00:25:42.203 cpu : usr=2.80%, sys=9.10%, ctx=425, majf=0, minf=5 00:25:42.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:42.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:42.203 issued rwts: total=3371,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:42.203 job2: (groupid=0, jobs=1): err= 0: pid=105484: Wed Nov 20 09:19:20 2024 00:25:42.203 read: IOPS=5097, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:25:42.203 slat (usec): min=9, max=3883, avg=96.37, stdev=488.37 00:25:42.203 clat (usec): min=398, max=17734, avg=12602.38, stdev=1241.99 00:25:42.203 lat (usec): min=3630, max=21091, avg=12698.75, stdev=1301.66 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 8029], 5.00th=[11076], 10.00th=[11863], 20.00th=[12256], 00:25:42.203 | 30.00th=[12518], 40.00th=[12518], 50.00th=[12649], 60.00th=[12780], 00:25:42.203 | 70.00th=[12911], 80.00th=[13173], 
90.00th=[13566], 95.00th=[14222], 00:25:42.203 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16581], 99.95th=[17171], 00:25:42.203 | 99.99th=[17695] 00:25:42.203 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:25:42.203 slat (usec): min=9, max=3607, avg=91.93, stdev=435.61 00:25:42.203 clat (usec): min=8685, max=16649, avg=12152.96, stdev=1294.07 00:25:42.203 lat (usec): min=8715, max=16671, avg=12244.88, stdev=1269.47 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[11600], 00:25:42.203 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:25:42.203 | 70.00th=[12780], 80.00th=[12911], 90.00th=[13435], 95.00th=[13698], 00:25:42.203 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:25:42.203 | 99.99th=[16712] 00:25:42.203 bw ( KiB/s): min=20480, max=20480, per=32.06%, avg=20480.00, stdev= 0.00, samples=2 00:25:42.203 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:25:42.203 lat (usec) : 500=0.01% 00:25:42.203 lat (msec) : 4=0.21%, 10=6.77%, 20=93.01% 00:25:42.203 cpu : usr=3.99%, sys=13.77%, ctx=412, majf=0, minf=1 00:25:42.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:42.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:42.203 issued rwts: total=5113,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:42.203 job3: (groupid=0, jobs=1): err= 0: pid=105485: Wed Nov 20 09:19:20 2024 00:25:42.203 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:25:42.203 slat (usec): min=5, max=7919, avg=185.06, stdev=837.04 00:25:42.203 clat (usec): min=13931, max=35008, avg=24039.90, stdev=3429.16 00:25:42.203 lat (usec): min=17873, max=35023, avg=24224.96, stdev=3423.77 00:25:42.203 clat percentiles 
(usec): 00:25:42.203 | 1.00th=[17957], 5.00th=[18220], 10.00th=[18744], 20.00th=[21103], 00:25:42.203 | 30.00th=[22152], 40.00th=[23462], 50.00th=[24511], 60.00th=[25035], 00:25:42.203 | 70.00th=[26084], 80.00th=[26870], 90.00th=[28705], 95.00th=[29492], 00:25:42.203 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34866], 99.95th=[34866], 00:25:42.203 | 99.99th=[34866] 00:25:42.203 write: IOPS=2799, BW=10.9MiB/s (11.5MB/s)(11.0MiB/1005msec); 0 zone resets 00:25:42.203 slat (usec): min=11, max=11262, avg=179.93, stdev=854.56 00:25:42.203 clat (usec): min=4659, max=30554, avg=23102.07, stdev=3716.55 00:25:42.203 lat (usec): min=7565, max=30579, avg=23282.00, stdev=3654.50 00:25:42.203 clat percentiles (usec): 00:25:42.203 | 1.00th=[10159], 5.00th=[17957], 10.00th=[18482], 20.00th=[19530], 00:25:42.203 | 30.00th=[21365], 40.00th=[22676], 50.00th=[23200], 60.00th=[24249], 00:25:42.203 | 70.00th=[25035], 80.00th=[26084], 90.00th=[28181], 95.00th=[28705], 00:25:42.203 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278], 00:25:42.203 | 99.99th=[30540] 00:25:42.203 bw ( KiB/s): min= 9200, max=12288, per=16.82%, avg=10744.00, stdev=2183.55, samples=2 00:25:42.203 iops : min= 2300, max= 3072, avg=2686.00, stdev=545.89, samples=2 00:25:42.203 lat (msec) : 10=0.41%, 20=18.37%, 50=81.22% 00:25:42.203 cpu : usr=1.39%, sys=8.67%, ctx=369, majf=0, minf=6 00:25:42.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:42.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:42.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:42.203 issued rwts: total=2560,2813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:42.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:42.203 00:25:42.204 Run status group 0 (all jobs): 00:25:42.204 READ: bw=58.8MiB/s (61.7MB/s), 9.95MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=59.1MiB (62.0MB), run=1001-1005msec 00:25:42.204 WRITE: bw=62.4MiB/s 
(65.4MB/s), 10.9MiB/s-19.9MiB/s (11.5MB/s-20.9MB/s), io=62.7MiB (65.7MB), run=1001-1005msec 00:25:42.204 00:25:42.204 Disk stats (read/write): 00:25:42.204 nvme0n1: ios=3806/4096, merge=0/0, ticks=15877/14308, in_queue=30185, util=89.38% 00:25:42.204 nvme0n2: ios=2609/2975, merge=0/0, ticks=11835/14162, in_queue=25997, util=87.84% 00:25:42.204 nvme0n3: ios=4096/4608, merge=0/0, ticks=16089/15792, in_queue=31881, util=89.02% 00:25:42.204 nvme0n4: ios=2099/2560, merge=0/0, ticks=11900/13181, in_queue=25081, util=89.25% 00:25:42.204 09:19:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:25:42.204 [global] 00:25:42.204 thread=1 00:25:42.204 invalidate=1 00:25:42.204 rw=randwrite 00:25:42.204 time_based=1 00:25:42.204 runtime=1 00:25:42.204 ioengine=libaio 00:25:42.204 direct=1 00:25:42.204 bs=4096 00:25:42.204 iodepth=128 00:25:42.204 norandommap=0 00:25:42.204 numjobs=1 00:25:42.204 00:25:42.204 verify_dump=1 00:25:42.204 verify_backlog=512 00:25:42.204 verify_state_save=0 00:25:42.204 do_verify=1 00:25:42.204 verify=crc32c-intel 00:25:42.204 [job0] 00:25:42.204 filename=/dev/nvme0n1 00:25:42.204 [job1] 00:25:42.204 filename=/dev/nvme0n2 00:25:42.204 [job2] 00:25:42.204 filename=/dev/nvme0n3 00:25:42.204 [job3] 00:25:42.204 filename=/dev/nvme0n4 00:25:42.204 Could not set queue depth (nvme0n1) 00:25:42.204 Could not set queue depth (nvme0n2) 00:25:42.204 Could not set queue depth (nvme0n3) 00:25:42.204 Could not set queue depth (nvme0n4) 00:25:42.204 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:42.204 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:42.204 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:42.204 job3: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:42.204 fio-3.35 00:25:42.204 Starting 4 threads 00:25:43.579 00:25:43.579 job0: (groupid=0, jobs=1): err= 0: pid=105539: Wed Nov 20 09:19:22 2024 00:25:43.579 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:25:43.579 slat (usec): min=6, max=5473, avg=104.72, stdev=478.95 00:25:43.579 clat (usec): min=8174, max=19217, avg=13647.63, stdev=2187.24 00:25:43.579 lat (usec): min=8320, max=20252, avg=13752.35, stdev=2204.96 00:25:43.579 clat percentiles (usec): 00:25:43.579 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10945], 20.00th=[11600], 00:25:43.579 | 30.00th=[12256], 40.00th=[12911], 50.00th=[13698], 60.00th=[14222], 00:25:43.579 | 70.00th=[14746], 80.00th=[15533], 90.00th=[16581], 95.00th=[17433], 00:25:43.579 | 99.00th=[18744], 99.50th=[18744], 99.90th=[19006], 99.95th=[19268], 00:25:43.579 | 99.99th=[19268] 00:25:43.579 write: IOPS=4987, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1003msec); 0 zone resets 00:25:43.579 slat (usec): min=11, max=4741, avg=94.72, stdev=348.20 00:25:43.579 clat (usec): min=2015, max=19020, avg=12766.82, stdev=2318.67 00:25:43.579 lat (usec): min=2039, max=19034, avg=12861.55, stdev=2326.02 00:25:43.579 clat percentiles (usec): 00:25:43.579 | 1.00th=[ 6325], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11076], 00:25:43.579 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12780], 60.00th=[13566], 00:25:43.579 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15533], 95.00th=[15795], 00:25:43.579 | 99.00th=[17695], 99.50th=[18220], 99.90th=[19006], 99.95th=[19006], 00:25:43.579 | 99.99th=[19006] 00:25:43.579 bw ( KiB/s): min=18520, max=20480, per=31.29%, avg=19500.00, stdev=1385.93, samples=2 00:25:43.579 iops : min= 4630, max= 5120, avg=4875.00, stdev=346.48, samples=2 00:25:43.579 lat (msec) : 4=0.29%, 10=7.33%, 20=92.38% 00:25:43.579 cpu : usr=4.89%, sys=15.57%, ctx=606, majf=0, minf=1 00:25:43.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.3%, >=64=99.3% 00:25:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.580 issued rwts: total=4608,5002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.580 job1: (groupid=0, jobs=1): err= 0: pid=105540: Wed Nov 20 09:19:22 2024 00:25:43.580 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:25:43.580 slat (usec): min=3, max=7639, avg=176.91, stdev=783.12 00:25:43.580 clat (usec): min=17545, max=31572, avg=23013.25, stdev=2403.29 00:25:43.580 lat (usec): min=17562, max=31590, avg=23190.17, stdev=2448.12 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[17957], 5.00th=[19530], 10.00th=[20055], 20.00th=[21103], 00:25:43.580 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22938], 60.00th=[23462], 00:25:43.580 | 70.00th=[24511], 80.00th=[25035], 90.00th=[26084], 95.00th=[26608], 00:25:43.580 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31065], 99.95th=[31327], 00:25:43.580 | 99.99th=[31589] 00:25:43.580 write: IOPS=2990, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1005msec); 0 zone resets 00:25:43.580 slat (usec): min=9, max=7046, avg=175.13, stdev=684.23 00:25:43.580 clat (usec): min=4177, max=32014, avg=22485.58, stdev=2572.89 00:25:43.580 lat (usec): min=4203, max=32028, avg=22660.71, stdev=2624.41 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[ 9241], 5.00th=[19530], 10.00th=[20841], 20.00th=[21627], 00:25:43.580 | 30.00th=[22152], 40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:25:43.580 | 70.00th=[23200], 80.00th=[23462], 90.00th=[24249], 95.00th=[25822], 00:25:43.580 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31065], 99.95th=[31851], 00:25:43.580 | 99.99th=[32113] 00:25:43.580 bw ( KiB/s): min=10736, max=12288, per=18.47%, avg=11512.00, stdev=1097.43, samples=2 00:25:43.580 iops : min= 2684, max= 3072, avg=2878.00, stdev=274.36, samples=2 
00:25:43.580 lat (msec) : 10=0.58%, 20=7.26%, 50=92.17% 00:25:43.580 cpu : usr=2.49%, sys=7.47%, ctx=737, majf=0, minf=8 00:25:43.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.580 issued rwts: total=2560,3005,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.580 job2: (groupid=0, jobs=1): err= 0: pid=105541: Wed Nov 20 09:19:22 2024 00:25:43.580 read: IOPS=4416, BW=17.3MiB/s (18.1MB/s)(17.3MiB/1004msec) 00:25:43.580 slat (usec): min=7, max=4891, avg=111.09, stdev=514.17 00:25:43.580 clat (usec): min=729, max=19773, avg=14229.01, stdev=2049.49 00:25:43.580 lat (usec): min=3529, max=20055, avg=14340.10, stdev=2070.85 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[ 6980], 5.00th=[11600], 10.00th=[12125], 20.00th=[12649], 00:25:43.580 | 30.00th=[12911], 40.00th=[13698], 50.00th=[14091], 60.00th=[14746], 00:25:43.580 | 70.00th=[15270], 80.00th=[16057], 90.00th=[16909], 95.00th=[17433], 00:25:43.580 | 99.00th=[18482], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:25:43.580 | 99.99th=[19792] 00:25:43.580 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:25:43.580 slat (usec): min=11, max=4601, avg=102.76, stdev=436.89 00:25:43.580 clat (usec): min=8901, max=19517, avg=13814.58, stdev=1644.21 00:25:43.580 lat (usec): min=8926, max=19556, avg=13917.34, stdev=1660.07 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[ 9503], 5.00th=[10028], 10.00th=[11994], 20.00th=[12649], 00:25:43.580 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:25:43.580 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15664], 95.00th=[15795], 00:25:43.580 | 99.00th=[17957], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:25:43.580 | 
99.99th=[19530] 00:25:43.580 bw ( KiB/s): min=16384, max=20480, per=29.58%, avg=18432.00, stdev=2896.31, samples=2 00:25:43.580 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:25:43.580 lat (usec) : 750=0.01% 00:25:43.580 lat (msec) : 4=0.14%, 10=2.73%, 20=97.11% 00:25:43.580 cpu : usr=2.89%, sys=13.96%, ctx=526, majf=0, minf=1 00:25:43.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.580 issued rwts: total=4434,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.580 job3: (groupid=0, jobs=1): err= 0: pid=105542: Wed Nov 20 09:19:22 2024 00:25:43.580 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.1MiB/1007msec) 00:25:43.580 slat (usec): min=3, max=6473, avg=174.45, stdev=768.28 00:25:43.580 clat (usec): min=5831, max=29720, avg=22348.53, stdev=2667.17 00:25:43.580 lat (usec): min=6568, max=31185, avg=22522.97, stdev=2710.94 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[16712], 5.00th=[18744], 10.00th=[19268], 20.00th=[20055], 00:25:43.580 | 30.00th=[20841], 40.00th=[21627], 50.00th=[22676], 60.00th=[22938], 00:25:43.580 | 70.00th=[23725], 80.00th=[24773], 90.00th=[25297], 95.00th=[26346], 00:25:43.580 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28967], 99.95th=[28967], 00:25:43.580 | 99.99th=[29754] 00:25:43.580 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:25:43.580 slat (usec): min=4, max=7462, avg=172.78, stdev=722.87 00:25:43.580 clat (usec): min=9543, max=30805, avg=22615.73, stdev=2243.63 00:25:43.580 lat (usec): min=10106, max=30823, avg=22788.51, stdev=2322.14 00:25:43.580 clat percentiles (usec): 00:25:43.580 | 1.00th=[14746], 5.00th=[18744], 10.00th=[20579], 20.00th=[21627], 00:25:43.580 | 30.00th=[22152], 
40.00th=[22414], 50.00th=[22676], 60.00th=[22938], 00:25:43.580 | 70.00th=[23200], 80.00th=[23725], 90.00th=[24511], 95.00th=[26084], 00:25:43.580 | 99.00th=[28967], 99.50th=[29492], 99.90th=[30278], 99.95th=[30540], 00:25:43.580 | 99.99th=[30802] 00:25:43.580 bw ( KiB/s): min=11392, max=12288, per=19.00%, avg=11840.00, stdev=633.57, samples=2 00:25:43.580 iops : min= 2848, max= 3072, avg=2960.00, stdev=158.39, samples=2 00:25:43.580 lat (msec) : 10=0.28%, 20=12.41%, 50=87.30% 00:25:43.580 cpu : usr=1.89%, sys=8.05%, ctx=765, majf=0, minf=7 00:25:43.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:43.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:43.580 issued rwts: total=2575,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:43.580 00:25:43.580 Run status group 0 (all jobs): 00:25:43.580 READ: bw=55.0MiB/s (57.7MB/s), 9.95MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=55.4MiB (58.1MB), run=1003-1007msec 00:25:43.580 WRITE: bw=60.9MiB/s (63.8MB/s), 11.7MiB/s-19.5MiB/s (12.2MB/s-20.4MB/s), io=61.3MiB (64.3MB), run=1003-1007msec 00:25:43.580 00:25:43.580 Disk stats (read/write): 00:25:43.580 nvme0n1: ios=4146/4183, merge=0/0, ticks=17185/14922, in_queue=32107, util=87.58% 00:25:43.580 nvme0n2: ios=2095/2560, merge=0/0, ticks=15220/17352, in_queue=32572, util=85.96% 00:25:43.580 nvme0n3: ios=3652/4096, merge=0/0, ticks=16181/16184, in_queue=32365, util=88.69% 00:25:43.580 nvme0n4: ios=2141/2560, merge=0/0, ticks=15457/17355, in_queue=32812, util=89.23% 00:25:43.581 09:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:25:43.581 09:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105557 00:25:43.581 09:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:25:43.581 09:19:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:25:43.581 [global] 00:25:43.581 thread=1 00:25:43.581 invalidate=1 00:25:43.581 rw=read 00:25:43.581 time_based=1 00:25:43.581 runtime=10 00:25:43.581 ioengine=libaio 00:25:43.581 direct=1 00:25:43.581 bs=4096 00:25:43.581 iodepth=1 00:25:43.581 norandommap=1 00:25:43.581 numjobs=1 00:25:43.581 00:25:43.581 [job0] 00:25:43.581 filename=/dev/nvme0n1 00:25:43.581 [job1] 00:25:43.581 filename=/dev/nvme0n2 00:25:43.581 [job2] 00:25:43.581 filename=/dev/nvme0n3 00:25:43.581 [job3] 00:25:43.581 filename=/dev/nvme0n4 00:25:43.581 Could not set queue depth (nvme0n1) 00:25:43.581 Could not set queue depth (nvme0n2) 00:25:43.581 Could not set queue depth (nvme0n3) 00:25:43.581 Could not set queue depth (nvme0n4) 00:25:43.581 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:43.581 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:43.581 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:43.581 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:43.581 fio-3.35 00:25:43.581 Starting 4 threads 00:25:46.860 09:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:25:46.860 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=50679808, buflen=4096 00:25:46.860 fio: pid=105600, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:46.860 09:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_raid_delete raid0 00:25:47.118 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45121536, buflen=4096 00:25:47.118 fio: pid=105599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:47.118 09:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:47.118 09:19:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:25:47.376 fio: pid=105597, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:47.376 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=54734848, buflen=4096 00:25:47.376 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:47.376 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:25:47.942 fio: pid=105598, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:25:47.942 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59514880, buflen=4096 00:25:47.942 00:25:47.942 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105597: Wed Nov 20 09:19:26 2024 00:25:47.942 read: IOPS=3825, BW=14.9MiB/s (15.7MB/s)(52.2MiB/3493msec) 00:25:47.942 slat (usec): min=12, max=15665, avg=26.85, stdev=221.24 00:25:47.942 clat (usec): min=161, max=4160, avg=232.32, stdev=73.45 00:25:47.942 lat (usec): min=177, max=16073, avg=259.17, stdev=235.29 00:25:47.942 clat percentiles (usec): 00:25:47.942 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:25:47.942 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 212], 60.00th=[ 
227], 00:25:47.942 | 70.00th=[ 245], 80.00th=[ 269], 90.00th=[ 302], 95.00th=[ 343], 00:25:47.942 | 99.00th=[ 400], 99.50th=[ 445], 99.90th=[ 553], 99.95th=[ 799], 00:25:47.942 | 99.99th=[ 3458] 00:25:47.942 bw ( KiB/s): min=13024, max=18384, per=30.95%, avg=15890.67, stdev=2184.86, samples=6 00:25:47.942 iops : min= 3256, max= 4596, avg=3972.67, stdev=546.22, samples=6 00:25:47.942 lat (usec) : 250=72.16%, 500=27.63%, 750=0.15%, 1000=0.02% 00:25:47.942 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:25:47.942 cpu : usr=1.55%, sys=7.07%, ctx=13374, majf=0, minf=1 00:25:47.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.942 issued rwts: total=13364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:47.942 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105598: Wed Nov 20 09:19:26 2024 00:25:47.942 read: IOPS=3637, BW=14.2MiB/s (14.9MB/s)(56.8MiB/3995msec) 00:25:47.942 slat (usec): min=8, max=12564, avg=24.10, stdev=180.26 00:25:47.942 clat (usec): min=155, max=3551, avg=248.96, stdev=79.09 00:25:47.942 lat (usec): min=173, max=12931, avg=273.06, stdev=198.58 00:25:47.942 clat percentiles (usec): 00:25:47.942 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:25:47.942 | 30.00th=[ 198], 40.00th=[ 223], 50.00th=[ 239], 60.00th=[ 249], 00:25:47.942 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 338], 95.00th=[ 404], 00:25:47.942 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 603], 99.95th=[ 750], 00:25:47.942 | 99.99th=[ 2442] 00:25:47.942 bw ( KiB/s): min=10584, max=18264, per=27.75%, avg=14248.71, stdev=2959.24, samples=7 00:25:47.942 iops : min= 2646, max= 4566, avg=3562.14, stdev=739.83, samples=7 00:25:47.942 lat (usec) : 250=60.72%, 
500=38.18%, 750=1.05%, 1000=0.03% 00:25:47.942 lat (msec) : 4=0.02% 00:25:47.942 cpu : usr=1.48%, sys=5.98%, ctx=14548, majf=0, minf=1 00:25:47.942 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.942 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.942 issued rwts: total=14531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.942 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:47.942 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105599: Wed Nov 20 09:19:26 2024 00:25:47.942 read: IOPS=3420, BW=13.4MiB/s (14.0MB/s)(43.0MiB/3221msec) 00:25:47.942 slat (usec): min=8, max=18664, avg=25.03, stdev=213.01 00:25:47.942 clat (usec): min=113, max=5713, avg=265.13, stdev=97.93 00:25:47.942 lat (usec): min=186, max=19079, avg=290.16, stdev=234.96 00:25:47.942 clat percentiles (usec): 00:25:47.942 | 1.00th=[ 184], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 208], 00:25:47.942 | 30.00th=[ 221], 40.00th=[ 235], 50.00th=[ 247], 60.00th=[ 258], 00:25:47.943 | 70.00th=[ 273], 80.00th=[ 302], 90.00th=[ 363], 95.00th=[ 420], 00:25:47.943 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 734], 99.95th=[ 857], 00:25:47.943 | 99.99th=[ 3294] 00:25:47.943 bw ( KiB/s): min=10768, max=16704, per=27.28%, avg=14009.33, stdev=2453.30, samples=6 00:25:47.943 iops : min= 2692, max= 4176, avg=3502.33, stdev=613.33, samples=6 00:25:47.943 lat (usec) : 250=53.19%, 500=45.25%, 750=1.45%, 1000=0.05% 00:25:47.943 lat (msec) : 2=0.01%, 4=0.03%, 10=0.01% 00:25:47.943 cpu : usr=1.55%, sys=6.02%, ctx=11025, majf=0, minf=1 00:25:47.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.943 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.943 
issued rwts: total=11017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:47.943 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105600: Wed Nov 20 09:19:26 2024 00:25:47.943 read: IOPS=4201, BW=16.4MiB/s (17.2MB/s)(48.3MiB/2945msec) 00:25:47.943 slat (usec): min=13, max=661, avg=26.45, stdev= 8.80 00:25:47.943 clat (usec): min=165, max=4173, avg=209.13, stdev=56.71 00:25:47.943 lat (usec): min=186, max=4206, avg=235.58, stdev=57.61 00:25:47.943 clat percentiles (usec): 00:25:47.943 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:25:47.943 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:25:47.943 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 251], 00:25:47.943 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 404], 99.95th=[ 676], 00:25:47.943 | 99.99th=[ 2966] 00:25:47.943 bw ( KiB/s): min=15368, max=17704, per=32.88%, avg=16881.60, stdev=889.75, samples=5 00:25:47.943 iops : min= 3842, max= 4426, avg=4220.40, stdev=222.44, samples=5 00:25:47.943 lat (usec) : 250=94.66%, 500=5.27%, 750=0.02%, 1000=0.01% 00:25:47.943 lat (msec) : 2=0.01%, 4=0.02%, 10=0.01% 00:25:47.943 cpu : usr=1.63%, sys=9.10%, ctx=12376, majf=0, minf=1 00:25:47.943 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:47.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.943 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.943 issued rwts: total=12374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.943 latency : target=0, window=0, percentile=100.00%, depth=1 00:25:47.943 00:25:47.943 Run status group 0 (all jobs): 00:25:47.943 READ: bw=50.1MiB/s (52.6MB/s), 13.4MiB/s-16.4MiB/s (14.0MB/s-17.2MB/s), io=200MiB (210MB), run=2945-3995msec 00:25:47.943 00:25:47.943 Disk stats (read/write): 00:25:47.943 nvme0n1: ios=12917/0, merge=0/0, 
ticks=3060/0, in_queue=3060, util=94.68% 00:25:47.943 nvme0n2: ios=13873/0, merge=0/0, ticks=3519/0, in_queue=3519, util=95.61% 00:25:47.943 nvme0n3: ios=10717/0, merge=0/0, ticks=2854/0, in_queue=2854, util=95.83% 00:25:47.943 nvme0n4: ios=12017/0, merge=0/0, ticks=2581/0, in_queue=2581, util=96.72% 00:25:47.943 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:47.943 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:25:48.201 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:48.201 09:19:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:25:48.459 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:48.460 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:25:49.024 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:49.024 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:25:49.024 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:25:49.024 09:19:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105557 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:49.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:49.589 nvmf hotplug test: fio failed as expected 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:25:49.589 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:25:49.589 09:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:49.847 rmmod nvme_tcp 00:25:49.847 rmmod nvme_fabrics 00:25:49.847 rmmod nvme_keyring 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- 
# return 0 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 105076 ']' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 105076 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105076 ']' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105076 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105076 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:49.847 killing process with pid 105076 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105076' 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105076 00:25:49.847 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105076 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/setup.sh@254 -- # local dev 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@266 -- # 
delete_dev initiator0 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:50.106 09:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:50.106 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:25:50.364 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # continue 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:25:50.364 ************************************ 00:25:50.364 END TEST nvmf_fio_target 00:25:50.364 ************************************ 00:25:50.364 00:25:50.364 real 0m20.568s 00:25:50.364 user 1m0.710s 00:25:50.364 sys 0m12.737s 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # 
'[' 4 -le 1 ']' 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:50.364 ************************************ 00:25:50.364 START TEST nvmf_bdevio 00:25:50.364 ************************************ 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:25:50.364 * Looking for test storage... 00:25:50.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:50.364 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:25:50.623 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:25:50.623 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:25:50.623 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:25:50.623 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.624 --rc genhtml_branch_coverage=1 
00:25:50.624 --rc genhtml_function_coverage=1 00:25:50.624 --rc genhtml_legend=1 00:25:50.624 --rc geninfo_all_blocks=1 00:25:50.624 --rc geninfo_unexecuted_blocks=1 00:25:50.624 00:25:50.624 ' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.624 --rc genhtml_branch_coverage=1 00:25:50.624 --rc genhtml_function_coverage=1 00:25:50.624 --rc genhtml_legend=1 00:25:50.624 --rc geninfo_all_blocks=1 00:25:50.624 --rc geninfo_unexecuted_blocks=1 00:25:50.624 00:25:50.624 ' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.624 --rc genhtml_branch_coverage=1 00:25:50.624 --rc genhtml_function_coverage=1 00:25:50.624 --rc genhtml_legend=1 00:25:50.624 --rc geninfo_all_blocks=1 00:25:50.624 --rc geninfo_unexecuted_blocks=1 00:25:50.624 00:25:50.624 ' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:50.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:50.624 --rc genhtml_branch_coverage=1 00:25:50.624 --rc genhtml_function_coverage=1 00:25:50.624 --rc genhtml_legend=1 00:25:50.624 --rc geninfo_all_blocks=1 00:25:50.624 --rc geninfo_unexecuted_blocks=1 00:25:50.624 00:25:50.624 ' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:50.624 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.624 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:50.624 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@223 -- # create_target_ns 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:50.625 
09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- 
# local id=0 type=veth ip=167772161 transport=tcp ips 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:50.625 
09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target0 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ 
-n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:50.625 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:50.625 10.0.0.1 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:25:50.625 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:50.626 
09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:50.626 10.0.0.2 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge 
initiator0_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:50.626 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:50.626 09:19:29 
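The `(( _dev++, ip_pool += 2 ))` step above is the loop's address bookkeeping: each `setup_interface_pair` call consumes two values from the pool (initiator first, then target), so pair 0 gets 10.0.0.1/10.0.0.2 and pair 1 gets 10.0.0.3/10.0.0.4. A rough, unprivileged sketch of just that bookkeeping (`val_to_ip` re-declared so the snippet is self-contained):

```shell
# Reproduce the ip_pool bookkeeping seen in the trace: two addresses
# per interface pair, starting at 167772161 (10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff )) $(( val & 0xff ))
}

ip_pool=167772161
for id in 0 1; do
    initiator_ip=$(val_to_ip "$ip_pool")
    target_ip=$(val_to_ip $(( ip_pool + 1 )))
    echo "pair $id: initiator$id=$initiator_ip target$id=$target_ip"
    (( ip_pool += 2 ))
done
```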
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 
-- # ip link set initiator1_br up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@151 -- # set_up target1 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@61 -- # add_to_ns target1 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:50.626 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772163 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:50.886 10.0.0.3 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772164 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:50.886 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:50.887 10.0.0.4 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p 
tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.887 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:50.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
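`ping_ip`, `set_up`, and `set_ip` in the trace all share one dispatch idiom: an optional `in_ns` argument names a bash array (bound with `local -n`, bash 4.3+), whose words — here `ip netns exec nvmf_ns_spdk` — are prefixed to the command before `eval`. A harmless sketch of that pattern, with `echo` standing in for the real namespace prefix so it runs unprivileged (`run_maybe_in_ns` and `FAKE_NS_CMD` are hypothetical names for illustration):

```shell
# Namespace dispatch as in setup.sh: if $2 names an array, its words
# (e.g. "ip netns exec nvmf_ns_spdk") are prefixed to the command.
run_maybe_in_ns() {
    local cmd=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref to the caller's array
        eval "${ns[*]} $cmd"
    else
        eval "$cmd"
    fi
}

# Stand-in prefix: prints the command instead of entering a netns.
FAKE_NS_CMD=(echo "would run in nvmf_ns_spdk:")
run_maybe_in_ns 'ping -c 1 10.0.0.1' FAKE_NS_CMD
run_maybe_in_ns 'echo ran on the host'
```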
00:25:50.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:50.887 00:25:50.887 --- 10.0.0.1 ping statistics --- 00:25:50.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.887 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 
00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:50.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:50.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:25:50.887 00:25:50.887 --- 10.0.0.2 ping statistics --- 00:25:50.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.887 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:50.887 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/initiator1/ifalias 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:50.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:50.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:25:50.888 00:25:50.888 --- 10.0.0.3 ping statistics --- 00:25:50.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.888 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 
00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:50.888 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:50.888 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms 00:25:50.888 00:25:50.888 --- 10.0.0.4 ping statistics --- 00:25:50.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.888 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # return 0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local 
dev=initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.888 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target0 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.888 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo target1 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=target1 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:50.889 
09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=105984 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 105984 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 105984 ']' 00:25:50.889 09:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.889 09:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:51.147 [2024-11-20 09:19:29.843827] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:51.147 [2024-11-20 09:19:29.844921] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:51.147 [2024-11-20 09:19:29.844989] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:51.147 [2024-11-20 09:19:29.990658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:51.406 [2024-11-20 09:19:30.081715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:51.406 [2024-11-20 09:19:30.081818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:51.406 [2024-11-20 09:19:30.081842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:51.406 [2024-11-20 09:19:30.081857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:51.406 [2024-11-20 09:19:30.081869] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:51.406 [2024-11-20 09:19:30.083801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:51.406 [2024-11-20 09:19:30.083886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:51.406 [2024-11-20 09:19:30.083948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:51.406 [2024-11-20 09:19:30.083956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:51.406 [2024-11-20 09:19:30.188466] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:51.406 [2024-11-20 09:19:30.188889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:51.406 [2024-11-20 09:19:30.188905] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:51.406 [2024-11-20 09:19:30.189343] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:51.406 [2024-11-20 09:19:30.189974] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.071 [2024-11-20 09:19:30.945577] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.071 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.329 Malloc0 00:25:52.329 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.329 
09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.329 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.329 09:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:52.329 [2024-11-20 09:19:31.021877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:25:52.329 09:19:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:52.329 { 00:25:52.329 "params": { 00:25:52.329 "name": "Nvme$subsystem", 00:25:52.329 "trtype": "$TEST_TRANSPORT", 00:25:52.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:52.329 "adrfam": "ipv4", 00:25:52.329 "trsvcid": "$NVMF_PORT", 00:25:52.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:52.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:52.329 "hdgst": ${hdgst:-false}, 00:25:52.329 "ddgst": ${ddgst:-false} 00:25:52.329 }, 00:25:52.329 "method": "bdev_nvme_attach_controller" 00:25:52.329 } 00:25:52.329 EOF 00:25:52.329 )") 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:25:52.329 09:19:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:52.329 "params": { 00:25:52.329 "name": "Nvme1", 00:25:52.329 "trtype": "tcp", 00:25:52.329 "traddr": "10.0.0.2", 00:25:52.329 "adrfam": "ipv4", 00:25:52.329 "trsvcid": "4420", 00:25:52.329 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:52.329 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:52.329 "hdgst": false, 00:25:52.329 "ddgst": false 00:25:52.329 }, 00:25:52.329 "method": "bdev_nvme_attach_controller" 00:25:52.329 }' 00:25:52.329 [2024-11-20 09:19:31.099174] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:52.329 [2024-11-20 09:19:31.099314] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106038 ] 00:25:52.587 [2024-11-20 09:19:31.302435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.587 [2024-11-20 09:19:31.386013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.587 [2024-11-20 09:19:31.386106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.587 [2024-11-20 09:19:31.386385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.846 I/O targets: 00:25:52.846 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:52.846 00:25:52.846 00:25:52.846 CUnit - A unit testing framework for C - Version 2.1-3 00:25:52.846 http://cunit.sourceforge.net/ 00:25:52.846 00:25:52.846 00:25:52.846 Suite: bdevio tests on: Nvme1n1 00:25:52.846 Test: blockdev write read block ...passed 00:25:52.846 Test: blockdev write zeroes read block ...passed 00:25:52.846 Test: blockdev write zeroes read no split ...passed 00:25:52.846 
Test: blockdev write zeroes read split ...passed 00:25:52.846 Test: blockdev write zeroes read split partial ...passed 00:25:52.846 Test: blockdev reset ...[2024-11-20 09:19:31.697270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:52.846 [2024-11-20 09:19:31.697444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec6f50 (9): Bad file descriptor 00:25:52.846 passed 00:25:52.846 Test: blockdev write read 8 blocks ...[2024-11-20 09:19:31.702397] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:25:52.846 passed 00:25:52.846 Test: blockdev write read size > 128k ...passed 00:25:52.846 Test: blockdev write read invalid size ...passed 00:25:52.846 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:52.846 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:52.846 Test: blockdev write read max offset ...passed 00:25:53.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:53.104 Test: blockdev writev readv 8 blocks ...passed 00:25:53.104 Test: blockdev writev readv 30 x 1block ...passed 00:25:53.104 Test: blockdev writev readv block ...passed 00:25:53.104 Test: blockdev writev readv size > 128k ...passed 00:25:53.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:53.104 Test: blockdev comparev and writev ...[2024-11-20 09:19:31.878563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.104 [2024-11-20 09:19:31.878627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.104 [2024-11-20 09:19:31.878648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x200 00:25:53.104 [2024-11-20 09:19:31.878659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.104 [2024-11-20 09:19:31.879037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.879072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.879426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.879459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.879824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.879863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:53.105 [2024-11-20 09:19:31.879872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.105 passed 00:25:53.105 Test: blockdev nvme passthru rw ...passed 00:25:53.105 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:19:31.963294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.105 [2024-11-20 09:19:31.963347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.963485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.105 [2024-11-20 09:19:31.963501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.105 passed 00:25:53.105 Test: blockdev nvme admin passthru ...[2024-11-20 09:19:31.963623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.105 [2024-11-20 09:19:31.963645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.105 [2024-11-20 09:19:31.963781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:53.105 [2024-11-20 09:19:31.963798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.105 passed 00:25:53.105 Test: blockdev copy ...passed 00:25:53.105 00:25:53.105 Run Summary: Type Total Ran Passed Failed Inactive 00:25:53.105 suites 1 1 n/a 0 0 00:25:53.105 tests 23 23 23 0 0 00:25:53.105 asserts 152 152 152 0 n/a 00:25:53.105 00:25:53.105 
Elapsed time = 0.865 seconds 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:53.363 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:53.622 rmmod nvme_tcp 00:25:53.622 rmmod nvme_fabrics 00:25:53.622 rmmod nvme_keyring 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:25:53.622 09:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 105984 ']' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 105984 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 105984 ']' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 105984 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105984 00:25:53.622 killing process with pid 105984 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105984' 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 105984 00:25:53.622 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 105984 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:25:53.880 09:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 
-- # local dev=initiator0 in_ns= 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:25:53.880 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:54.149 09:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # continue 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:54.149 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:25:54.150 00:25:54.150 real 0m3.753s 00:25:54.150 user 0m8.158s 00:25:54.150 sys 0m1.455s 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.150 ************************************ 00:25:54.150 END TEST nvmf_bdevio 00:25:54.150 ************************************ 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:54.150 09:19:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:54.150 ************************************ 00:25:54.150 START TEST nvmf_target_multipath 00:25:54.150 ************************************ 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:54.150 * Looking for test storage... 00:25:54.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:25:54.150 09:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:54.150 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:54.150 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@337 -- # IFS=.-: 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:54.151 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:54.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.413 --rc genhtml_branch_coverage=1 00:25:54.413 --rc genhtml_function_coverage=1 00:25:54.413 --rc genhtml_legend=1 00:25:54.413 --rc geninfo_all_blocks=1 00:25:54.413 --rc geninfo_unexecuted_blocks=1 00:25:54.413 00:25:54.413 ' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:54.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.413 --rc genhtml_branch_coverage=1 00:25:54.413 --rc genhtml_function_coverage=1 00:25:54.413 --rc genhtml_legend=1 00:25:54.413 --rc geninfo_all_blocks=1 00:25:54.413 --rc geninfo_unexecuted_blocks=1 00:25:54.413 00:25:54.413 ' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:54.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.413 --rc genhtml_branch_coverage=1 00:25:54.413 --rc genhtml_function_coverage=1 00:25:54.413 --rc genhtml_legend=1 00:25:54.413 --rc geninfo_all_blocks=1 00:25:54.413 --rc geninfo_unexecuted_blocks=1 00:25:54.413 00:25:54.413 ' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:54.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.413 --rc genhtml_branch_coverage=1 00:25:54.413 --rc genhtml_function_coverage=1 00:25:54.413 --rc genhtml_legend=1 00:25:54.413 --rc geninfo_all_blocks=1 00:25:54.413 --rc geninfo_unexecuted_blocks=1 00:25:54.413 00:25:54.413 ' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 
00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:25:54.413 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.414 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # 
remove_target_ns 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@280 -- # nvmf_veth_init 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@223 -- # create_target_ns 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:54.414 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@224 -- # create_main_bridge 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@105 -- # delete_main_bridge 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:25:54.414 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath 
-- nvmf/setup.sh@44 -- # ips=() 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator0 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 
up' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target0 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:25:54.414 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0 up 00:25:54.414 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target0_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target0 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local 
val=167772161 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:25:54.415 10.0.0.1 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:25:54.415 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:54.415 10.0.0.2 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator0 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 
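
[editor's note] The trace above repeatedly calls `val_to_ip` to turn a 32-bit value from the test's IP pool into dotted-quad form (e.g. `val_to_ip 167772161` followed by `printf '%u.%u.%u.%u\n' 10 0 0 1`, yielding `10.0.0.1`). A minimal sketch of that conversion, assuming the standard big-endian shift-and-mask approach — the actual helper in `nvmf/setup.sh` may be written differently:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip conversion seen in the trace: a 32-bit
# integer from the ip_pool is split into four octets, high byte first.
# 167772161 == 0x0A000001 == 10.0.0.1
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) \
    $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) \
    $((val & 0xff))
}

val_to_ip 167772161   # 10.0.0.1 (initiator0)
val_to_ip 167772162   # 10.0.0.2 (target0)
```

This is why consecutive interface pairs in the trace land on consecutive addresses: the pool value is simply incremented by 2 per pair (`ip_pool += 2`), so pair 1 gets 167772163/167772164, i.e. 10.0.0.3 and 10.0.0.4.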
00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target0_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:25:54.415 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:25:54.415 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up initiator1 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:25:54.415 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:25:54.416 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@151 -- # set_up target1 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1 up 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@152 -- # set_up target1_br 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns target1 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:25:54.416 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set 
target1 netns nvmf_ns_spdk 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772163 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:25:54.675 10.0.0.3 00:25:54.675 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772164 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:25:54.676 10.0.0.4 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up initiator1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # 
local dev=initiator1 in_ns= 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@129 -- # set_up initiator1_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@129 -- # set_up target1_br 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:54.676 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 2 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 
-- # [[ -n '' ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.676 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:54.676 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:54.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:54.676 00:25:54.676 --- 10.0.0.1 ping statistics --- 00:25:54.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.676 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:54.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:54.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:25:54.677 00:25:54.677 --- 10.0.0.2 ping statistics --- 00:25:54.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.677 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator1/ifalias' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:25:54.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:54.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:25:54.677 00:25:54.677 --- 10.0.0.3 ping statistics --- 00:25:54.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.677 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:54.677 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:25:54.677 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:25:54.677 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.095 ms 00:25:54.677 00:25:54.677 --- 10.0.0.4 ping statistics --- 00:25:54.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.677 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # return 0 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:54.677 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # 
get_net_dev initiator0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:54.678 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=initiator1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' 
NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target0 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:54.678 09:19:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=target1 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # nvmfappstart -m 0xF 
00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:54.678 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # nvmfpid=106271 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@329 -- # waitforlisten 106271 00:25:54.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 106271 ']' 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.679 09:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:54.937 [2024-11-20 09:19:33.640050] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:54.937 [2024-11-20 09:19:33.641138] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:25:54.937 [2024-11-20 09:19:33.641200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.937 [2024-11-20 09:19:33.790702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:55.195 [2024-11-20 09:19:33.882142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.195 [2024-11-20 09:19:33.882518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.195 [2024-11-20 09:19:33.882748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.195 [2024-11-20 09:19:33.882918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.195 [2024-11-20 09:19:33.883047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:55.195 [2024-11-20 09:19:33.884669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.195 [2024-11-20 09:19:33.884722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.195 [2024-11-20 09:19:33.884803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.195 [2024-11-20 09:19:33.884795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.195 [2024-11-20 09:19:34.022849] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:55.195 [2024-11-20 09:19:34.023636] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:55.195 [2024-11-20 09:19:34.023299] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:55.195 [2024-11-20 09:19:34.023198] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:55.195 [2024-11-20 09:19:34.024316] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:56.130 09:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:56.130 [2024-11-20 09:19:34.986526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.130 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:56.695 Malloc0 00:25:56.695 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:25:56.953 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:57.211 09:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:57.469 [2024-11-20 09:19:36.194617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.469 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:25:57.727 [2024-11-20 09:19:36.458584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:25:57.727 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:25:57.727 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:25:57.984 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:25:57.984 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:25:57.984 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:57.984 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:57.984 09:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 
00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@66 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:25:59.896 09:19:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@66 -- # subsystem=nvme-subsys0 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # paths=("${paths[@]##*/}") 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@70 -- # (( 2 == 2 )) 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # p0=nvme0c0n1 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # p1=nvme0c1n1 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@75 -- # check_ana_state nvme0c0n1 optimized 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # check_ana_state nvme0c1n1 optimized 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # echo numa 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # fio_pid=106409 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:59.896 09:19:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@84 -- # sleep 1 00:25:59.896 [global] 00:25:59.896 thread=1 00:25:59.896 invalidate=1 00:25:59.896 rw=randrw 00:25:59.896 time_based=1 00:25:59.896 runtime=6 00:25:59.896 ioengine=libaio 00:25:59.896 direct=1 00:25:59.896 bs=4096 00:25:59.896 iodepth=128 00:25:59.896 norandommap=0 00:25:59.896 numjobs=1 00:25:59.896 
00:25:59.896 verify_dump=1 00:25:59.896 verify_backlog=512 00:25:59.896 verify_state_save=0 00:25:59.896 do_verify=1 00:25:59.896 verify=crc32c-intel 00:25:59.896 [job0] 00:25:59.896 filename=/dev/nvme0n1 00:25:59.896 Could not set queue depth (nvme0n1) 00:26:00.155 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:00.155 fio-3.35 00:26:00.155 Starting 1 thread 00:26:01.087 09:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:01.345 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@89 -- # check_ana_state nvme0c0n1 inaccessible 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # check_ana_state nvme0c1n1 non-optimized 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:01.602 09:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:02.975 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:02.975 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:02.975 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:02.975 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.975 09:19:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 non-optimized 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 inaccessible 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:03.233 09:19:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:04.615 09:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:04.615 09:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:04.615 09:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:04.615 09:19:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # wait 106409 00:26:06.525 00:26:06.525 job0: (groupid=0, jobs=1): err= 0: pid=106436: Wed Nov 20 09:19:45 2024 00:26:06.525 read: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(240MiB/6007msec) 00:26:06.525 slat (usec): min=4, max=13246, avg=55.68, stdev=292.56 00:26:06.525 clat (usec): min=2041, max=38003, avg=8331.93, stdev=2559.87 00:26:06.525 lat (usec): min=2060, max=38575, avg=8387.60, stdev=2582.20 00:26:06.525 clat percentiles (usec): 00:26:06.525 | 1.00th=[ 4752], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:26:06.525 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8160], 00:26:06.525 | 70.00th=[ 8455], 80.00th=[ 8979], 90.00th=[10028], 95.00th=[11207], 00:26:06.525 | 99.00th=[23725], 99.50th=[26084], 99.90th=[33817], 99.95th=[35390], 00:26:06.525 | 99.99th=[37487] 00:26:06.525 bw ( KiB/s): min= 7256, max=27568, per=52.55%, avg=21542.67, stdev=6373.79, samples=12 00:26:06.525 iops : min= 1814, max= 6892, avg=5385.67, stdev=1593.45, samples=12 00:26:06.525 write: IOPS=6088, BW=23.8MiB/s (24.9MB/s)(127MiB/5325msec); 0 zone resets 00:26:06.525 slat (usec): min=7, max=4385, avg=68.26, stdev=169.80 00:26:06.525 clat (usec): min=2444, max=35382, avg=7679.25, stdev=2487.87 00:26:06.525 lat (usec): min=2477, max=35419, avg=7747.51, stdev=2505.18 00:26:06.525 clat percentiles (usec): 00:26:06.525 | 1.00th=[ 4178], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 6783], 00:26:06.525 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:26:06.525 | 70.00th=[ 7767], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9765], 00:26:06.525 | 99.00th=[22676], 99.50th=[24511], 99.90th=[32113], 99.95th=[33817], 00:26:06.525 | 99.99th=[34341] 
00:26:06.525 bw ( KiB/s): min= 7792, max=27440, per=88.59%, avg=21574.00, stdev=6263.23, samples=12 00:26:06.525 iops : min= 1948, max= 6860, avg=5393.50, stdev=1565.81, samples=12 00:26:06.525 lat (msec) : 4=0.51%, 10=90.96%, 20=7.13%, 50=1.39% 00:26:06.525 cpu : usr=5.61%, sys=22.34%, ctx=7265, majf=0, minf=102 00:26:06.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:06.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.525 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:06.525 issued rwts: total=61566,32420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.525 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:06.525 00:26:06.525 Run status group 0 (all jobs): 00:26:06.525 READ: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=240MiB (252MB), run=6007-6007msec 00:26:06.525 WRITE: bw=23.8MiB/s (24.9MB/s), 23.8MiB/s-23.8MiB/s (24.9MB/s-24.9MB/s), io=127MiB (133MB), run=5325-5325msec 00:26:06.525 00:26:06.525 Disk stats (read/write): 00:26:06.525 nvme0n1: ios=60884/31744, merge=0/0, ticks=475710/232533, in_queue=708243, util=98.67% 00:26:06.525 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:06.525 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:26:06.783 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@103 -- # check_ana_state nvme0c0n1 optimized 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 
ana_state=optimized 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # check_ana_state nvme0c1n1 optimized 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:26:06.784 09:19:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # echo round-robin 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # fio_pid=106557 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@112 -- # sleep 1 00:26:08.156 09:19:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:26:08.156 [global] 00:26:08.156 thread=1 00:26:08.156 invalidate=1 00:26:08.156 rw=randrw 00:26:08.156 time_based=1 00:26:08.156 runtime=6 00:26:08.156 ioengine=libaio 00:26:08.156 direct=1 00:26:08.156 bs=4096 00:26:08.156 iodepth=128 00:26:08.156 norandommap=0 00:26:08.156 numjobs=1 00:26:08.156 00:26:08.156 verify_dump=1 00:26:08.156 verify_backlog=512 00:26:08.156 verify_state_save=0 00:26:08.156 do_verify=1 00:26:08.156 verify=crc32c-intel 00:26:08.156 [job0] 00:26:08.156 filename=/dev/nvme0n1 00:26:08.156 Could not set queue depth (nvme0n1) 00:26:08.156 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:08.156 fio-3.35 00:26:08.156 Starting 1 thread 00:26:09.091 09:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@114 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:09.091 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@117 -- # check_ana_state nvme0c0n1 inaccessible 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # check_ana_state nvme0c1n1 non-optimized 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:09.658 09:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:10.593 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:10.593 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:10.593 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:10.593 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:10.851 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:26:11.110 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 non-optimized 00:26:11.110 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:26:11.110 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:11.110 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:26:11.110 09:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:26:11.110 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 inaccessible 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:11.111 09:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:26:12.112 09:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:26:12.112 09:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:26:12.112 09:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:26:12.112 09:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # wait 106557 00:26:14.644 00:26:14.644 job0: (groupid=0, jobs=1): err= 0: pid=106578: Wed Nov 20 09:19:52 2024 00:26:14.644 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(278MiB/6006msec) 00:26:14.644 slat (usec): min=2, max=6329, avg=42.94, stdev=224.86 00:26:14.644 clat (usec): min=271, max=48407, avg=7228.44, stdev=1969.58 00:26:14.644 lat (usec): min=305, max=48415, avg=7271.38, stdev=1983.77 00:26:14.644 clat percentiles (usec): 00:26:14.644 | 1.00th=[ 1467], 5.00th=[ 3654], 10.00th=[ 4817], 20.00th=[ 5800], 00:26:14.644 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7439], 60.00th=[ 7767], 00:26:14.644 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 9241], 95.00th=[10159], 00:26:14.644 | 99.00th=[11994], 99.50th=[12649], 99.90th=[14746], 99.95th=[15270], 00:26:14.644 | 99.99th=[16712] 00:26:14.644 bw ( KiB/s): min= 8648, max=37408, per=53.96%, avg=25623.27, stdev=9319.16, samples=11 00:26:14.644 iops : min= 2162, max= 9352, avg=6405.82, stdev=2329.79, samples=11 00:26:14.644 write: IOPS=7193, BW=28.1MiB/s (29.5MB/s)(150MiB/5338msec); 0 zone resets 00:26:14.644 slat (usec): min=12, max=3480, avg=54.53, stdev=120.83 00:26:14.644 clat (usec): min=213, max=15028, avg=6427.03, stdev=1861.44 00:26:14.644 lat (usec): min=253, max=15069, avg=6481.56, stdev=1870.98 00:26:14.644 clat percentiles (usec): 00:26:14.644 | 1.00th=[ 1090], 5.00th=[ 2900], 10.00th=[ 3851], 20.00th=[ 4752], 00:26:14.644 | 30.00th=[ 5866], 40.00th=[ 6652], 50.00th=[ 6980], 60.00th=[ 7177], 00:26:14.644 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8029], 95.00th=[ 8586], 00:26:14.644 | 99.00th=[10945], 99.50th=[11600], 99.90th=[13304], 99.95th=[13829], 00:26:14.644 | 99.99th=[14353] 
00:26:14.644 bw ( KiB/s): min= 9176, max=36448, per=89.02%, avg=25616.00, stdev=9044.48, samples=11 00:26:14.644 iops : min= 2294, max= 9112, avg=6404.00, stdev=2261.12, samples=11 00:26:14.644 lat (usec) : 250=0.01%, 500=0.03%, 750=0.14%, 1000=0.31% 00:26:14.644 lat (msec) : 2=1.61%, 4=5.69%, 10=87.91%, 20=4.30%, 50=0.01% 00:26:14.644 cpu : usr=5.91%, sys=25.61%, ctx=9903, majf=0, minf=102 00:26:14.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:26:14.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:14.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:14.644 issued rwts: total=71295,38400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:14.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:14.644 00:26:14.644 Run status group 0 (all jobs): 00:26:14.644 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=278MiB (292MB), run=6006-6006msec 00:26:14.644 WRITE: bw=28.1MiB/s (29.5MB/s), 28.1MiB/s-28.1MiB/s (29.5MB/s-29.5MB/s), io=150MiB (157MB), run=5338-5338msec 00:26:14.644 00:26:14.644 Disk stats (read/write): 00:26:14.644 nvme0n1: ios=70320/37781, merge=0/0, ticks=470907/224395, in_queue=695302, util=98.56% 00:26:14.644 09:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@128 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:14.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.644 09:19:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@133 -- # rm -f ./local-job0-0-verify.state 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # rm -f ./local-job1-1-verify.state 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@136 -- # trap - SIGINT SIGTERM EXIT 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@138 -- # nvmftestfini 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:14.644 09:19:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:14.644 rmmod nvme_tcp 00:26:14.644 rmmod nvme_fabrics 00:26:14.644 rmmod nvme_keyring 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:26:14.644 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n 106271 ']' 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@337 -- # killprocess 106271 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 106271 ']' 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 106271 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.645 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106271 00:26:14.903 killing process with pid 106271 00:26:14.903 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.903 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.903 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 106271' 00:26:14.903 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 106271 00:26:14.903 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 106271 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:15.162 09:19:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:15.162 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:15.163 09:19:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # continue 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # 
iptables-restore 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:15.163 00:26:15.163 real 0m21.130s 00:26:15.163 user 1m10.849s 00:26:15.163 sys 0m9.711s 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:15.163 ************************************ 00:26:15.163 END TEST nvmf_target_multipath 00:26:15.163 ************************************ 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:15.163 ************************************ 00:26:15.163 START TEST nvmf_zcopy 00:26:15.163 ************************************ 00:26:15.163 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:26:15.423 * Looking for test storage... 
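The multipath test that just completed drives all of its state checks through a small polling helper (visible in the xtrace as target/multipath.sh @18 through @26): it waits for /sys/block/&lt;path&gt;/ana_state to exist and report the expected ANA state, retrying once per second for up to 20 attempts. A minimal sketch, reconstructed from the trace rather than copied from the repository — the SYSFS_ROOT override is a hypothetical addition so the helper can be exercised outside a real multipath setup:

```shell
#!/usr/bin/env bash
# Sketch of the check_ana_state polling loop exercised in the trace above.
# Variable names (path, ana_state, timeout, ana_state_f) follow the xtrace
# output; SYSFS_ROOT is an assumption added here for testability only.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=${SYSFS_ROOT:-}/sys/block/$path/ana_state

    # Retry once per second until the sysfs attribute exists and reports
    # the expected ANA state; give up after $timeout attempts.
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
}

# Usage, matching the calls seen in the log:
#   check_ana_state nvme0c0n1 optimized
#   check_ana_state nvme0c1n1 inaccessible
```

This is why the log shows a `sleep 1s` followed by a re-check whenever a controller path has not yet converged to the ANA state set via `nvmf_subsystem_listener_set_ana_state`.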
00:26:15.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # 
case "$op" in 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.423 --rc genhtml_branch_coverage=1 00:26:15.423 --rc genhtml_function_coverage=1 00:26:15.423 --rc genhtml_legend=1 00:26:15.423 --rc geninfo_all_blocks=1 00:26:15.423 --rc geninfo_unexecuted_blocks=1 00:26:15.423 00:26:15.423 ' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.423 --rc genhtml_branch_coverage=1 00:26:15.423 --rc genhtml_function_coverage=1 00:26:15.423 --rc genhtml_legend=1 00:26:15.423 --rc geninfo_all_blocks=1 00:26:15.423 --rc geninfo_unexecuted_blocks=1 00:26:15.423 00:26:15.423 ' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.423 --rc genhtml_branch_coverage=1 00:26:15.423 --rc genhtml_function_coverage=1 00:26:15.423 --rc genhtml_legend=1 00:26:15.423 --rc geninfo_all_blocks=1 00:26:15.423 --rc geninfo_unexecuted_blocks=1 00:26:15.423 00:26:15.423 ' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.423 --rc genhtml_branch_coverage=1 00:26:15.423 --rc genhtml_function_coverage=1 00:26:15.423 --rc genhtml_legend=1 00:26:15.423 --rc geninfo_all_blocks=1 00:26:15.423 --rc geninfo_unexecuted_blocks=1 00:26:15.423 00:26:15.423 ' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.423 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:26:15.424 
09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:15.424 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@223 -- # create_target_ns 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:15.424 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:15.424 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:26:15.424 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 
-- # ip link add initiator0 type veth peer name initiator0_br 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target0 00:26:15.684 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:15.684 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0 up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:15.685 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:26:15.685 10.0.0.1 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:26:15.685 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:15.685 10.0.0.2 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 
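The `val_to_ip` calls in the trace turn integers from the `ip_pool` counter into dotted-quad addresses: 167772161 is 0x0A000001, which prints as 10.0.0.1, and the next value yields 10.0.0.2. A sketch of that conversion, assuming the standard big-endian byte ordering the printed addresses imply:

```shell
# Convert a 32-bit integer to a dotted-quad IPv4 address, as the
# val_to_ip helper in the trace does (167772161 == 0x0A000001 == 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
```

This is why the pool advances by 2 per pair (`ip_pool += 2`): each pair consumes one initiator address and one target address.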
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:15.685 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
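At this point the trace has finished pair 0 and records it in `dev_map` before looping to pair 1. Each pair follows the same recipe: a veth pair per endpoint, the target end moved into the `nvmf_ns_spdk` namespace, addresses from the 10.0.0.0/24 pool, and the `_br` peers enslaved to the `nvmf_br` bridge. A condensed sketch of one pair follows; the real commands require root, so the `run` wrapper echoes them under `DRY_RUN=1` so the flow can be read without privileges (the wrapper is an aid for this sketch, not part of setup.sh):

```shell
# Recreate one initiator/target veth pair as seen in the trace.
# run() echoes instead of executing when DRY_RUN=1; device and
# namespace names match the log.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_pair() {
    local id=$1 ip_i=$2 ip_t=$3 ns=nvmf_ns_spdk
    run ip link add "initiator$id" type veth peer name "initiator${id}_br"
    run ip link add "target$id"    type veth peer name "target${id}_br"
    run ip link set "target$id" netns "$ns"               # target side lives in the namespace
    run ip addr add "$ip_i/24" dev "initiator$id"
    run ip netns exec "$ns" ip addr add "$ip_t/24" dev "target$id"
    run ip link set "initiator$id" up
    run ip netns exec "$ns" ip link set "target$id" up
    run ip link set "initiator${id}_br" master nvmf_br    # bridge the _br peers
    run ip link set "target${id}_br"    master nvmf_br
    run ip link set "initiator${id}_br" up
    run ip link set "target${id}_br" up
}
```

`DRY_RUN=1 setup_pair 0 10.0.0.1 10.0.0.2` prints a command sequence corresponding to pair 0 in the log; the iptables ACCEPT rule for port 4420 seen just above would then admit NVMe/TCP traffic arriving on the initiator device.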
nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:15.685 
09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:26:15.685 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@151 -- # set_up target1 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # set_up target1_br 
00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772163 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 
dev initiator1' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:15.686 10.0.0.3 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772164 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:15.686 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:15.686 10.0.0.4 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:15.686 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@129 -- # set_up target1_br 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:15.945 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 2 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:15.945 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:15.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:26:15.946 00:26:15.946 --- 10.0.0.1 ping statistics --- 00:26:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.946 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@101 -- # echo target0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:15.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:15.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:26:15.946 00:26:15.946 --- 10.0.0.2 ping statistics --- 00:26:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.946 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 
00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:26:15.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:15.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:15.946 00:26:15.946 --- 10.0.0.3 ping statistics --- 00:26:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.946 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:15.946 
09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:26:15.946 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:15.946 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms 00:26:15.946 00:26:15.946 --- 10.0.0.4 ping statistics --- 00:26:15.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.946 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:15.946 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # return 0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 
00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:15.947 
09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=initiator1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:26:15.947 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target0 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo target1 00:26:15.947 09:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=target1 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=106910 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 106910 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 106910 ']' 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.947 09:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:15.947 [2024-11-20 09:19:54.842815] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:15.947 [2024-11-20 09:19:54.843891] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:26:15.947 [2024-11-20 09:19:54.843977] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:16.206 [2024-11-20 09:19:54.989290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.206 [2024-11-20 09:19:55.055878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:16.206 [2024-11-20 09:19:55.055942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:16.206 [2024-11-20 09:19:55.055954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:16.206 [2024-11-20 09:19:55.055962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:16.206 [2024-11-20 09:19:55.055969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:16.206 [2024-11-20 09:19:55.056345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.464 [2024-11-20 09:19:55.152138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:16.464 [2024-11-20 09:19:55.152464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.464 [2024-11-20 09:19:55.229180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.464 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.465 [2024-11-20 09:19:55.257267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.465 malloc0 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:16.465 09:19:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:26:16.465 { 00:26:16.465 "params": { 00:26:16.465 "name": "Nvme$subsystem", 00:26:16.465 "trtype": "$TEST_TRANSPORT", 00:26:16.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:16.465 "adrfam": "ipv4", 00:26:16.465 "trsvcid": "$NVMF_PORT", 00:26:16.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:16.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:16.465 "hdgst": ${hdgst:-false}, 00:26:16.465 "ddgst": ${ddgst:-false} 00:26:16.465 }, 00:26:16.465 "method": "bdev_nvme_attach_controller" 00:26:16.465 } 00:26:16.465 EOF 00:26:16.465 )") 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:26:16.465 09:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:26:16.465 "params": { 00:26:16.465 "name": "Nvme1", 00:26:16.465 "trtype": "tcp", 00:26:16.465 "traddr": "10.0.0.2", 00:26:16.465 "adrfam": "ipv4", 00:26:16.465 "trsvcid": "4420", 00:26:16.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:16.465 "hdgst": false, 00:26:16.465 "ddgst": false 00:26:16.465 }, 00:26:16.465 "method": "bdev_nvme_attach_controller" 00:26:16.465 }' 00:26:16.465 [2024-11-20 09:19:55.370102] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:26:16.465 [2024-11-20 09:19:55.370195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106948 ] 00:26:16.723 [2024-11-20 09:19:55.522327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.723 [2024-11-20 09:19:55.595159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.983 Running I/O for 10 seconds... 
00:26:19.298 5597.00 IOPS, 43.73 MiB/s [2024-11-20T09:19:59.150Z] 5648.50 IOPS, 44.13 MiB/s [2024-11-20T09:20:00.089Z] 5659.33 IOPS, 44.21 MiB/s [2024-11-20T09:20:01.023Z] 5678.50 IOPS, 44.36 MiB/s [2024-11-20T09:20:01.956Z] 5687.80 IOPS, 44.44 MiB/s [2024-11-20T09:20:02.890Z] 5691.17 IOPS, 44.46 MiB/s [2024-11-20T09:20:03.826Z] 5697.14 IOPS, 44.51 MiB/s [2024-11-20T09:20:05.285Z] 5698.25 IOPS, 44.52 MiB/s [2024-11-20T09:20:05.853Z] 5687.33 IOPS, 44.43 MiB/s [2024-11-20T09:20:05.853Z] 5692.40 IOPS, 44.47 MiB/s 00:26:26.934 Latency(us) 00:26:26.934 [2024-11-20T09:20:05.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.934 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:26:26.934 Verification LBA range: start 0x0 length 0x1000 00:26:26.934 Nvme1n1 : 10.01 5696.60 44.50 0.00 0.00 22398.98 3023.59 34317.03 00:26:26.934 [2024-11-20T09:20:05.853Z] =================================================================================================================== 00:26:26.934 [2024-11-20T09:20:05.853Z] Total : 5696.60 44.50 0.00 0.00 22398.98 3023.59 34317.03 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=107056 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:26:27.193 09:20:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:26:27.193 { 00:26:27.193 "params": { 00:26:27.193 "name": "Nvme$subsystem", 00:26:27.193 "trtype": "$TEST_TRANSPORT", 00:26:27.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.193 "adrfam": "ipv4", 00:26:27.193 "trsvcid": "$NVMF_PORT", 00:26:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.193 "hdgst": ${hdgst:-false}, 00:26:27.193 "ddgst": ${ddgst:-false} 00:26:27.193 }, 00:26:27.193 "method": "bdev_nvme_attach_controller" 00:26:27.193 } 00:26:27.193 EOF 00:26:27.193 )") 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:26:27.193 [2024-11-20 09:20:06.008929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.193 [2024-11-20 09:20:06.009158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:26:27.193 09:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:26:27.193 "params": { 00:26:27.193 "name": "Nvme1", 00:26:27.193 "trtype": "tcp", 00:26:27.193 "traddr": "10.0.0.2", 00:26:27.193 "adrfam": "ipv4", 00:26:27.193 "trsvcid": "4420", 00:26:27.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.193 "hdgst": false, 00:26:27.193 "ddgst": false 00:26:27.193 }, 00:26:27.193 "method": "bdev_nvme_attach_controller" 00:26:27.193 }' 00:26:27.193 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.193 [2024-11-20 09:20:06.020886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.193 [2024-11-20 09:20:06.020930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.193 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.193 [2024-11-20 09:20:06.032880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.193 [2024-11-20 09:20:06.032925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.193 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:26:27.193 [2024-11-20 09:20:06.044886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.193 [2024-11-20 09:20:06.044933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.193 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.193 [2024-11-20 09:20:06.056879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.194 [2024-11-20 09:20:06.056924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.194 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.194 [2024-11-20 09:20:06.064074] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:26:27.194 [2024-11-20 09:20:06.064328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107056 ] 00:26:27.194 [2024-11-20 09:20:06.068879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.194 [2024-11-20 09:20:06.068918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.194 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.194 [2024-11-20 09:20:06.080876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.194 [2024-11-20 09:20:06.080926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.194 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.194 [2024-11-20 09:20:06.092874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.194 [2024-11-20 09:20:06.092915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.194 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.194 [2024-11-20 09:20:06.104871] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.194 [2024-11-20 09:20:06.104914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.194 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.452 [2024-11-20 09:20:06.116867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.452 [2024-11-20 09:20:06.116909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.452 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.452 [2024-11-20 09:20:06.128865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.128904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.140858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.140901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.152863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.152900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.164854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.164889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.176927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.176994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.188879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.188918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.200860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.200894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.209126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.453 [2024-11-20 09:20:06.212885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.212919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.224892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.224946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.236880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.236928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.248869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.248912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.260873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.260911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.272886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.272925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.284886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.284925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 [2024-11-20 09:20:06.285694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.296871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.296906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.308887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.308931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.320892] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.320935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.332897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.332940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.344887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.344932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.356877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.356916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.453 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.453 [2024-11-20 09:20:06.368889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.453 [2024-11-20 09:20:06.368934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.380893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.380935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.392881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.392922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.404869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.404912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.416866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.416907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.428866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.428903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.440863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.440898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.452892] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.452933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.464871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.464912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 Running I/O for 5 seconds... 
00:26:27.712 [2024-11-20 09:20:06.485635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.485679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.502698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.502739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.520884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.712 [2024-11-20 09:20:06.520927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.712 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.712 [2024-11-20 09:20:06.531092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 09:20:06.531129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.713 [2024-11-20 09:20:06.547258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 09:20:06.547304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.713 [2024-11-20 09:20:06.563543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 09:20:06.563583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.713 [2024-11-20 09:20:06.577506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 09:20:06.577543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.713 [2024-11-20 09:20:06.597947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 
09:20:06.598030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.713 [2024-11-20 09:20:06.614934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.713 [2024-11-20 09:20:06.614979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.713 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.631661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.631711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.642204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.642242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:27.972 [2024-11-20 09:20:06.658769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.658828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.675326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.675383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.690474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.690526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.709067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.709104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.719326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.719382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.734090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.734138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.753682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.753717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.773421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 
09:20:06.773460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.791989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.792025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.801842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.801876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.818859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.818897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:27.972 [2024-11-20 09:20:06.834475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.834516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.851790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.851845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.862167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.862205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:27.972 [2024-11-20 09:20:06.877474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:27.972 [2024-11-20 09:20:06.877512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:27.972 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.897904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.897943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.916664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.916730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.926280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.926316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.942738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 
09:20:06.942792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.960807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.960861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.970559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.970594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:06.985608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:06.985648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:06 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:28.231 [2024-11-20 09:20:07.004970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:07.005019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:07.015450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.231 [2024-11-20 09:20:07.015487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.231 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.231 [2024-11-20 09:20:07.030625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.030668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.049118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.049160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.059478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.059518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.074635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.074845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.088426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.088621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.110675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 
09:20:07.110749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.232 [2024-11-20 09:20:07.132476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.232 [2024-11-20 09:20:07.132530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.232 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.153319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.153578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.171893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.171946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:28.490 [2024-11-20 09:20:07.181947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.181991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.198852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.198901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.215010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.215060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.233183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.233237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.242650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.242850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.257946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.490 [2024-11-20 09:20:07.257993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.490 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.490 [2024-11-20 09:20:07.276144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.276193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.285993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 
09:20:07.286048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.302261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.302309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.319399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.319449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.334312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.334359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:28.491 [2024-11-20 09:20:07.352574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.352622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.373814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.373865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.388705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.388769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.491 [2024-11-20 09:20:07.398805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.491 [2024-11-20 09:20:07.398848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.491 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.413768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.413814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.432647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.432713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.442631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.442675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.458121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 
09:20:07.458165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 11123.00 IOPS, 86.90 MiB/s [2024-11-20T09:20:07.669Z] [2024-11-20 09:20:07.477250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.477296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.497087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.497134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.507638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.507686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.522803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.522850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.538538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.538587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.557102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.557152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.566902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.566944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.583430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.583478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.594310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.594355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.610274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.610322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.629562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:26:28.750 [2024-11-20 09:20:07.629618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.648734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.648789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:28.750 [2024-11-20 09:20:07.658888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:28.750 [2024-11-20 09:20:07.658929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:28.750 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.673007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.673051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.683521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.683564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.697991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.698047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.717712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.717781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.737229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.737281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.754584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.754634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.772906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.772950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.783130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.783171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.798250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 
09:20:07.798294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.816799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.816858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.827224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.827264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.842620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.842660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:29.043 [2024-11-20 09:20:07.858882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.858922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.874692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.874732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.891319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.891357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.907504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.907544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.922604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.922644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.941187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.941226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.043 [2024-11-20 09:20:07.951449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.043 [2024-11-20 09:20:07.951503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.043 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:07.966439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 
09:20:07.966478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:07.985187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:07.985227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.001525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.001564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.021493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.021533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:29.303 [2024-11-20 09:20:08.038431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.038472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.057009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.057071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.066934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.066972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.083306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.083357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.093737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.093911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.110499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.110550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.129109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.129318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.140149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 
09:20:08.140198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.155114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.155164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.171428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.171462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.186631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.186668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:29.303 [2024-11-20 09:20:08.205199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.205237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.303 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.303 [2024-11-20 09:20:08.215678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.303 [2024-11-20 09:20:08.215723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.230820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.230852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.246243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.246277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.265935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.265968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.283113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.283162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.299524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.299573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.314583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 
09:20:08.314622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.331494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.331543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.346392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.346425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.364599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.364633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:29.562 [2024-11-20 09:20:08.385588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.385636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.405917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.405964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.422694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.422741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.441530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.441579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 [2024-11-20 09:20:08.459614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.459661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.562 11161.50 IOPS, 87.20 MiB/s [2024-11-20T09:20:08.481Z] [2024-11-20 09:20:08.474323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.562 [2024-11-20 09:20:08.474357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.562 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.493004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.493034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.503742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:26:29.821 [2024-11-20 09:20:08.503791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.519710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.519759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.531591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.531643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.547089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.547122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.563223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.563289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.578452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.578486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.597138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.597171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.607344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.607378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.621799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.621844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.821 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.821 [2024-11-20 09:20:08.640440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.821 [2024-11-20 09:20:08.640488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.650429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.822 [2024-11-20 09:20:08.650461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.665465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:26:29.822 [2024-11-20 09:20:08.665514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.684596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.822 [2024-11-20 09:20:08.684630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.706199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.822 [2024-11-20 09:20:08.706246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.721044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.822 [2024-11-20 09:20:08.721078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:26:29.822 [2024-11-20 09:20:08.731078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:29.822 [2024-11-20 09:20:08.731110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:29.822 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.080 [2024-11-20 09:20:08.745650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.080 [2024-11-20 09:20:08.745684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.080 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.080 [2024-11-20 09:20:08.764283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.080 [2024-11-20 09:20:08.764317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.080 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.080 [2024-11-20 09:20:08.774465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.080 [2024-11-20 09:20:08.774499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.080 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.080 [2024-11-20 09:20:08.790450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.080 [2024-11-20 09:20:08.790487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.808986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.809024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.819566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.819603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.835426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 
09:20:08.835468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.845828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.845863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.862221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.862262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.880529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.880570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:30.081 [2024-11-20 09:20:08.891411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.891452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.906103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.906150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.922723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.922781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.941319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.941360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.960728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.960779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.971856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.971893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.081 [2024-11-20 09:20:08.985583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.081 [2024-11-20 09:20:08.985623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.081 2024/11/20 09:20:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.004561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 
09:20:09.004603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.024372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.024416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.045506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.045551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.063739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.063794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:30.341 [2024-11-20 09:20:09.074040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.074082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.088311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.088349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.098268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.098310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.112877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.112930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.122761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.122818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.137844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.137883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.156131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.156321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.178028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 
09:20:09.178068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:30.341 [2024-11-20 09:20:09.192727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:30.341 [2024-11-20 09:20:09.192779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:30.341 2024/11/20 09:20:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line sequence (subsystem.c:2123 "Requested NSID 1 already in use", nvmf_rpc.c:1517 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters for nvmf_subsystem_add_ns) repeats with successive timestamps from 09:20:09.202448 through 09:20:10.335161 ...]
11159.67 IOPS, 87.18 MiB/s [2024-11-20T09:20:09.519Z]
[2024-11-20 09:20:10.348521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 
09:20:10.348564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.369352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.369402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.389891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.389941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.407975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.408019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:31.637 [2024-11-20 09:20:10.417784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.417819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.432809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.432850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.443021] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.443064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.458344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.458388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 11190.75 IOPS, 87.43 MiB/s [2024-11-20T09:20:10.556Z] [2024-11-20 09:20:10.476580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.476625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.637 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.637 [2024-11-20 09:20:10.497331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.637 [2024-11-20 09:20:10.497383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.638 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.638 [2024-11-20 09:20:10.516137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.638 [2024-11-20 09:20:10.516329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.638 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.638 [2024-11-20 09:20:10.537610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:26:31.638 [2024-11-20 09:20:10.537679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.638 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.555076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.555277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.569440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.569493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.589913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.589968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.608290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.608348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.617933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.617974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.633098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.633298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.644561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.644790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.655816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.655858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.896 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.896 [2024-11-20 09:20:10.669846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.896 [2024-11-20 09:20:10.669890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.689856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.689924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.709486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:26:31.897 [2024-11-20 09:20:10.709556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.726714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.726782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.743381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.743595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.766077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.766343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.788153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.788204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:31.897 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:31.897 [2024-11-20 09:20:10.809454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:31.897 [2024-11-20 09:20:10.809653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.829139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.829186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.839842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.839893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, 
params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.855326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.855559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.878065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.878334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.895116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.895164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.910637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 
09:20:10.910684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.926869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.926914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.943201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.943244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.156 [2024-11-20 09:20:10.958275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.958320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.156 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:32.156 [2024-11-20 09:20:10.976365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.156 [2024-11-20 09:20:10.976413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:10.997409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:10.997449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:11.017017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:11.017059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:11.028095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:11.028132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:11.039221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:11.039257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:11.055077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:11.055114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.157 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.157 [2024-11-20 09:20:11.071107] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.157 [2024-11-20 09:20:11.071144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.415 [2024-11-20 09:20:11.087321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.415 [2024-11-20 
09:20:11.087361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.415 [2024-11-20 09:20:11.102981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.415 [2024-11-20 09:20:11.103019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.415 [2024-11-20 09:20:11.119399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.415 [2024-11-20 09:20:11.119440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.415 [2024-11-20 09:20:11.134980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.415 [2024-11-20 09:20:11.135022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:26:32.415 [2024-11-20 09:20:11.151318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.415 [2024-11-20 09:20:11.151359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.415 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.726 11197.40 IOPS, 87.48 MiB/s [2024-11-20T09:20:11.645Z] 00:26:32.726 00:26:32.726 Latency(us) 00:26:32.726 [2024-11-20T09:20:11.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.726 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:26:32.726 Nvme1n1 : 5.01 11199.78 87.50 0.00 0.00 11414.05 3127.85 20137.43 00:26:32.726 [2024-11-20T09:20:11.645Z] 
=================================================================================================================== 00:26:32.726 [2024-11-20T09:20:11.645Z] Total : 11199.78 87.50 0.00 0.00 11414.05 3127.85 20137.43 00:26:32.726 [2024-11-20 09:20:11.484963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:26:32.726 [2024-11-20 09:20:11.485129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:26:32.726 2024/11/20 09:20:11 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:26:32.986 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (107056) - No such process 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 107056 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.986 
09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:32.986 delay0 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.986 09:20:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:26:32.986 [2024-11-20 09:20:11.887414] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:41.105 Initializing NVMe Controllers 00:26:41.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.105 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.105 Initialization complete. Launching workers. 00:26:41.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 245, failed: 21028 00:26:41.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21143, failed to submit 130 00:26:41.105 success 21062, unsuccessful 81, failed 0 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:41.105 09:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:41.105 rmmod nvme_tcp 00:26:41.105 rmmod nvme_fabrics 00:26:41.105 rmmod nvme_keyring 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 106910 ']' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 
106910 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 106910 ']' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 106910 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106910 00:26:41.105 killing process with pid 106910 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106910' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 106910 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 106910 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:41.105 09:20:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete 
initiator0' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # continue 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # 
reset_setup_interfaces 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:26:41.105 00:26:41.105 real 0m25.374s 00:26:41.105 user 0m38.907s 00:26:41.105 sys 0m8.573s 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.105 ************************************ 00:26:41.105 END TEST nvmf_zcopy 00:26:41.105 ************************************ 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # trap - SIGINT SIGTERM EXIT 00:26:41.105 ************************************ 00:26:41.105 END TEST nvmf_target_core_interrupt_mode 00:26:41.105 ************************************ 00:26:41.105 00:26:41.105 real 3m37.807s 00:26:41.105 user 9m45.696s 00:26:41.105 sys 1m23.961s 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:41.105 09:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:41.105 09:20:19 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:41.105 09:20:19 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:41.105 09:20:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:41.105 09:20:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.105 ************************************ 00:26:41.105 START TEST nvmf_interrupt 00:26:41.105 ************************************ 00:26:41.105 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:41.105 * Looking for test storage... 00:26:41.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.106 --rc genhtml_branch_coverage=1 00:26:41.106 --rc 
genhtml_function_coverage=1 00:26:41.106 --rc genhtml_legend=1 00:26:41.106 --rc geninfo_all_blocks=1 00:26:41.106 --rc geninfo_unexecuted_blocks=1 00:26:41.106 00:26:41.106 ' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.106 --rc genhtml_branch_coverage=1 00:26:41.106 --rc genhtml_function_coverage=1 00:26:41.106 --rc genhtml_legend=1 00:26:41.106 --rc geninfo_all_blocks=1 00:26:41.106 --rc geninfo_unexecuted_blocks=1 00:26:41.106 00:26:41.106 ' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.106 --rc genhtml_branch_coverage=1 00:26:41.106 --rc genhtml_function_coverage=1 00:26:41.106 --rc genhtml_legend=1 00:26:41.106 --rc geninfo_all_blocks=1 00:26:41.106 --rc geninfo_unexecuted_blocks=1 00:26:41.106 00:26:41.106 ' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:41.106 --rc genhtml_branch_coverage=1 00:26:41.106 --rc genhtml_function_coverage=1 00:26:41.106 --rc genhtml_legend=1 00:26:41.106 --rc geninfo_all_blocks=1 00:26:41.106 --rc geninfo_unexecuted_blocks=1 00:26:41.106 00:26:41.106 ' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.106 09:20:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:41.106 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@280 -- # nvmf_veth_init 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@223 -- # create_target_ns 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # create_main_bridge 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@105 -- # delete_main_bridge 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.107 
09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=initiator0 
peer=initiator0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up initiator0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up target0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0 
up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up target0_br 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns target0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee 
/sys/class/net/initiator0/ifalias 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:41.107 10.0.0.1 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:26:41.107 10.0.0.2 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up initiator0 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set 
initiator0 up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:41.107 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:26:41.108 
09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up target0_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up initiator1 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@149 -- # 
ip link add target1 type veth peer name target1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@151 -- # set_up target1 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1 up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # set_up target1_br 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns target1 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772163 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:26:41.108 09:20:19 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:26:41.108 09:20:19 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:26:41.108 10.0.0.3 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772164 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:26:41.108 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:26:41.108 
09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:26:41.368 10.0.0.4 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up initiator1 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:41.368 
09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up'
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set initiator1_br up
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ veth == veth ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_bridge target1_br
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@129 -- # set_up target1_br
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns=
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up'
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set target1_br up
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT'
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 2
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=2 pair
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator0
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:26:41.368 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:26:41.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:41.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms
00:26:41.369 
00:26:41.369 --- 10.0.0.1 ping statistics ---
00:26:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.369 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target0
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target0
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:26:41.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:41.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:26:41.369 
00:26:41.369 --- 10.0.0.2 ping statistics ---
00:26:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.369 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ ))
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3
00:26:41.369 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:41.369 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms
00:26:41.369 
00:26:41.369 --- 10.0.0.3 ping statistics ---
00:26:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.369 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4
00:26:41.369 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:41.369 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.084 ms
00:26:41.369 
00:26:41.369 --- 10.0.0.4 ping statistics ---
00:26:41.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.369 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ ))
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # return 0
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:26:41.369 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=initiator1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.3
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.3
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target0
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=target1
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.4
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.4
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=107449
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 107449
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 107449 ']'
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:41.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:41.370 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:41.629 [2024-11-20 09:20:20.315339] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:26:41.629 [2024-11-20 09:20:20.316696] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:26:41.629 [2024-11-20 09:20:20.316806] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:41.629 [2024-11-20 09:20:20.472294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:26:41.629 [2024-11-20 09:20:20.544220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:41.629 [2024-11-20 09:20:20.544292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:41.629 [2024-11-20 09:20:20.544306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:41.629 [2024-11-20 09:20:20.544317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:41.629 [2024-11-20 09:20:20.544326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:41.629 [2024-11-20 09:20:20.545606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:41.629 [2024-11-20 09:20:20.545621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:41.887 [2024-11-20 09:20:20.645797] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:41.888 [2024-11-20 09:20:20.646606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:26:41.888 [2024-11-20 09:20:20.646634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:26:41.888 5000+0 records in
00:26:41.888 5000+0 records out
00:26:41.888 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0317139 s, 323 MB/s
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:41.888 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:41.888 AIO0
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:42.147 [2024-11-20 09:20:20.810752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:26:42.147 [2024-11-20 09:20:20.842925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107449 0
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 0 idle
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:26:42.147 09:20:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107449 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.30 reactor_0'
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107449 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.30 reactor_0
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 107449 1
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 1 idle
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256
00:26:42.147 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107453 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.00 reactor_1'
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107453 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.00 reactor_1
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=107510
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107449 0
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107449 0 busy
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256
00:26:42.406 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:26:42.678 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107449 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.30 reactor_0'
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107449 root 20 0 64.2g 45568 33024 S 0.0 0.4 0:00.30 reactor_0
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:26:42.679 09:20:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1
00:26:43.638 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- ))
00:26:43.638 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:26:43.638 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:26:43.638 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107449 root 20 0 64.2g 46848 33408 R 99.9 0.4 0:01.69 reactor_0'
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107449 root 20 0 64.2g 46848 33408 R 99.9 0.4 0:01.69 reactor_0
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
09:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 107449 1 00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 107449 1 busy 00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449 00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:43.897 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107453 root 20 0 64.2g 46848 33408 R 66.7 0.4 0:00.82 reactor_1' 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107453 root 20 0 64.2g 46848 33408 R 66.7 0.4 0:00.82 reactor_1 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 
busy = \b\u\s\y ]] 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:43.898 09:20:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 107510 00:26:53.886 Initializing NVMe Controllers 00:26:53.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.886 Controller IO queue size 256, less than required. 00:26:53.886 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:53.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:53.886 Initialization complete. Launching workers. 00:26:53.886 ======================================================== 00:26:53.886 Latency(us) 00:26:53.886 Device Information : IOPS MiB/s Average min max 00:26:53.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 6332.80 24.74 40491.94 6745.17 82367.93 00:26:53.886 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 6612.90 25.83 38754.48 7534.37 98761.02 00:26:53.886 ======================================================== 00:26:53.886 Total : 12945.70 50.57 39604.42 6745.17 98761.02 00:26:53.886 00:26:53.886 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:53.886 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107449 0 00:26:53.886 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 0 idle 00:26:53.886 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449 00:26:53.886 09:20:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107449 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:13.62 reactor_0' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107449 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:13.62 reactor_0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:53.887 09:20:31 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 107449 1 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 1 idle 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107453 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:06.66 reactor_1' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107453 root 20 0 64.2g 46848 33408 S 0.0 0.4 0:06.66 reactor_1 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 
's/^\s*//g' 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:53.887 09:20:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1212 -- # return 0 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107449 0 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 0 idle 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256 00:26:55.262 09:20:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107449 root 20 0 64.2g 48896 33408 S 6.7 0.4 0:13.68 reactor_0' 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107449 root 20 0 64.2g 48896 33408 S 6.7 0.4 0:13.68 reactor_0 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:55.262 
09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:55.262 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 107449 1 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 107449 1 idle 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=107449 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 107449 -w 256 00:26:55.263 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep 
reactor_1 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 107453 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:06.68 reactor_1' 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 107453 root 20 0 64.2g 48896 33408 S 0.0 0.4 0:06.68 reactor_1 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:55.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1235 -- # return 0 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:55.521 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:56.086 rmmod nvme_tcp 00:26:56.086 rmmod nvme_fabrics 00:26:56.086 rmmod nvme_keyring 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 107449 ']' 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 107449 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 107449 ']' 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 107449 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107449 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:26:56.086 killing process with pid 107449 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107449' 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 107449 00:26:56.086 09:20:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 107449 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:26:56.431 09:20:35 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # continue 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # continue 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@273 -- # 
reset_setup_interfaces 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:26:56.431 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:26:56.432 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:56.432 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:26:56.432 09:20:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:26:56.432 00:26:56.432 real 0m15.698s 00:26:56.432 user 0m28.049s 00:26:56.432 sys 0m7.502s 00:26:56.432 09:20:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.432 09:20:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 ************************************ 00:26:56.432 END TEST nvmf_interrupt 00:26:56.432 ************************************ 00:26:56.432 00:26:56.432 real 20m49.089s 00:26:56.432 user 55m1.144s 00:26:56.432 sys 5m10.668s 00:26:56.432 09:20:35 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.432 09:20:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 ************************************ 00:26:56.432 END TEST nvmf_tcp 00:26:56.432 ************************************ 00:26:56.432 09:20:35 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:26:56.432 09:20:35 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:56.432 09:20:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:56.432 09:20:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.432 09:20:35 -- common/autotest_common.sh@10 -- # set +x 00:26:56.432 ************************************ 00:26:56.432 START TEST spdkcli_nvmf_tcp 00:26:56.432 ************************************ 00:26:56.432 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:56.690 * Looking for test storage... 00:26:56.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.690 --rc genhtml_branch_coverage=1 00:26:56.690 --rc genhtml_function_coverage=1 00:26:56.690 --rc genhtml_legend=1 00:26:56.690 --rc geninfo_all_blocks=1 00:26:56.690 --rc geninfo_unexecuted_blocks=1 00:26:56.690 00:26:56.690 ' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.690 --rc genhtml_branch_coverage=1 00:26:56.690 --rc genhtml_function_coverage=1 00:26:56.690 --rc genhtml_legend=1 00:26:56.690 --rc geninfo_all_blocks=1 00:26:56.690 --rc 
geninfo_unexecuted_blocks=1 00:26:56.690 00:26:56.690 ' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.690 --rc genhtml_branch_coverage=1 00:26:56.690 --rc genhtml_function_coverage=1 00:26:56.690 --rc genhtml_legend=1 00:26:56.690 --rc geninfo_all_blocks=1 00:26:56.690 --rc geninfo_unexecuted_blocks=1 00:26:56.690 00:26:56.690 ' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:56.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.690 --rc genhtml_branch_coverage=1 00:26:56.690 --rc genhtml_function_coverage=1 00:26:56.690 --rc genhtml_legend=1 00:26:56.690 --rc geninfo_all_blocks=1 00:26:56.690 --rc geninfo_unexecuted_blocks=1 00:26:56.690 00:26:56.690 ' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:56.690 
09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:56.690 09:20:35 
spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:56.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=107828 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 107828 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 107828 ']' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 
00:26:56.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.690 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:56.948 [2024-11-20 09:20:35.633856] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:26:56.948 [2024-11-20 09:20:35.633987] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107828 ] 00:26:56.948 [2024-11-20 09:20:35.787293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:56.948 [2024-11-20 09:20:35.860002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.948 [2024-11-20 09:20:35.860015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.205 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.205 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:57.205 09:20:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:57.205 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.205 09:20:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.205 09:20:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:57.205 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:57.205 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:57.205 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:57.205 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:57.205 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:57.205 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:57.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:57.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:57.205 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' 
True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:57.205 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:57.205 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:57.205 ' 00:27:00.493 [2024-11-20 09:20:38.861715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.432 [2024-11-20 09:20:40.178909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:03.963 [2024-11-20 09:20:42.620662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:05.877 [2024-11-20 09:20:44.718249] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:07.783 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:07.783 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:07.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:07.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:07.783 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces 
create Malloc1', 'Malloc1', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:07.783 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:07.783 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.783 
09:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:07.783 09:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:27:08.351 09:20:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:08.351 09:20:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:08.351 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:08.351 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.351 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.351 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:08.352 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.352 09:20:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:08.352 09:20:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:08.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:08.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:08.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:08.352 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:08.352 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:08.352 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:08.352 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:08.352 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:08.352 ' 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:14.918 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:14.918 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:14.918 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:14.918 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:14.918 Executing command: 
['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:14.918 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:14.918 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:14.918 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:14.918 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 107828 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107828 ']' 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107828 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107828 00:27:14.918 killing process with pid 107828 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:14.918 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:14.919 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107828' 00:27:14.919 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 107828 00:27:14.919 09:20:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 107828 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 
107828 ']' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 107828 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 107828 ']' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 107828 00:27:14.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (107828) - No such process 00:27:14.919 Process with pid 107828 is not found 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 107828 is not found' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:14.919 00:27:14.919 real 0m17.713s 00:27:14.919 user 0m38.582s 00:27:14.919 sys 0m0.947s 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.919 09:20:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:14.919 ************************************ 00:27:14.919 END TEST spdkcli_nvmf_tcp 00:27:14.919 ************************************ 00:27:14.919 09:20:53 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:14.919 09:20:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:14.919 09:20:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.919 09:20:53 -- common/autotest_common.sh@10 -- # set +x 00:27:14.919 ************************************ 00:27:14.919 START TEST nvmf_identify_passthru 00:27:14.919 ************************************ 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:14.919 * Looking for test storage... 00:27:14.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.919 --rc genhtml_branch_coverage=1 00:27:14.919 --rc genhtml_function_coverage=1 00:27:14.919 --rc genhtml_legend=1 00:27:14.919 --rc geninfo_all_blocks=1 00:27:14.919 --rc geninfo_unexecuted_blocks=1 00:27:14.919 00:27:14.919 ' 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.919 --rc genhtml_branch_coverage=1 00:27:14.919 --rc genhtml_function_coverage=1 
00:27:14.919 --rc genhtml_legend=1 00:27:14.919 --rc geninfo_all_blocks=1 00:27:14.919 --rc geninfo_unexecuted_blocks=1 00:27:14.919 00:27:14.919 ' 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.919 --rc genhtml_branch_coverage=1 00:27:14.919 --rc genhtml_function_coverage=1 00:27:14.919 --rc genhtml_legend=1 00:27:14.919 --rc geninfo_all_blocks=1 00:27:14.919 --rc geninfo_unexecuted_blocks=1 00:27:14.919 00:27:14.919 ' 00:27:14.919 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:14.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.919 --rc genhtml_branch_coverage=1 00:27:14.919 --rc genhtml_function_coverage=1 00:27:14.919 --rc genhtml_legend=1 00:27:14.919 --rc geninfo_all_blocks=1 00:27:14.919 --rc geninfo_unexecuted_blocks=1 00:27:14.919 00:27:14.919 ' 00:27:14.919 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.919 09:20:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.919 09:20:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.919 09:20:53 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.919 09:20:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.919 09:20:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:14.919 09:20:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:27:14.919 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # 
export NVMF_APP_SHM_ID 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:14.920 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:14.920 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:14.920 09:20:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.920 09:20:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.920 09:20:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.920 09:20:53 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.920 09:20:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.920 09:20:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:14.920 09:20:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.920 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.920 09:20:53 nvmf_identify_passthru -- 
nvmf/common.sh@296 -- # prepare_net_devs 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:14.920 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:27:14.920 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@223 -- # create_target_ns 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # 
[[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@28 
-- # local -g _dev 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- 
# ip link set initiator0 up 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:14.920 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up target0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- 
nvmf/setup.sh@61 -- # add_to_ns target0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:14.921 10.0.0.1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 
-- # val_to_ip 167772162 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:14.921 10.0.0.2 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 
00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:14.921 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- 
nvmf/setup.sh@151 -- # set_up initiator1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@151 -- # set_up target1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:14.921 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:14.921 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772163 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:14.922 10.0.0.3 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip target1 167772164 
NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772164 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:14.922 10.0.0.4 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # 
local dev=target1_br in_ns= 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:14.922 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:14.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:14.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:27:14.922 00:27:14.922 --- 10.0.0.1 ping statistics --- 00:27:14.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.922 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target0 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:14.922 09:20:53 nvmf_identify_passthru 
-- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:14.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:14.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.028 ms 00:27:14.922 00:27:14.922 --- 10.0.0.2 ping statistics --- 00:27:14.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.922 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator1 00:27:14.922 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:14.923 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:14.923 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:14.923 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:14.923 00:27:14.923 --- 10.0.0.3 ping statistics --- 00:27:14.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.923 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:27:14.923 09:20:53 nvmf_identify_passthru 
-- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:14.923 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:27:14.923 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:27:14.923 00:27:14.923 --- 10.0.0.4 ping statistics --- 00:27:14.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:14.923 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@281 -- # return 0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:14.923 09:20:53 nvmf_identify_passthru -- 
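The ping checks traced above alternate between running `ping` directly (initiator side) and wrapping it in `ip netns exec nvmf_ns_spdk` (target side). A minimal sketch of that command construction, printing the command instead of executing it so no network or namespace is needed (`build_ping_cmd` is a hypothetical name, not the harness function):

```shell
# Sketch of how the harness builds its ping command: with a namespace the
# ping is prefixed by "ip netns exec <ns>", without one it runs directly.
# The command is echoed rather than executed.
build_ping_cmd() {
    local ip=$1 ns=$2
    if [ -n "$ns" ]; then
        echo "ip netns exec $ns ping -c 1 $ip"
    else
        echo "ping -c 1 $ip"
    fi
}

build_ping_cmd 10.0.0.2 ""            # direct: ping -c 1 10.0.0.2
build_ping_cmd 10.0.0.3 nvmf_ns_spdk  # namespaced, as in the trace
```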
nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:14.923 09:20:53 nvmf_identify_passthru -- 
nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target0 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:14.923 09:20:53 
nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:27:14.923 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo target1 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=target1 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:14.924 09:20:53 
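The `NVMF_FIRST_TARGET_IP`/`NVMF_SECOND_TARGET_IP` values above are derived by reading each device's `ifalias` from sysfs. A simplified sketch of that lookup, with a temp directory standing in for `/sys/class/net` so it runs anywhere (the namespace wrapper from the trace is omitted; `fake_sys` is an assumption of this sketch):

```shell
# Simulated /sys/class/net tree; the real harness reads
# /sys/class/net/<dev>/ifalias (optionally inside a netns).
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/target0"
echo 10.0.0.2 > "$fake_sys/target0/ifalias"

get_ip_address() {
    local dev=$1 ip
    ip=$(cat "$fake_sys/$dev/ifalias")
    # Only echo when an alias was actually set, mirroring the [[ -n $ip ]] check.
    [ -n "$ip" ] && echo "$ip"
}

get_ip_address target0   # prints 10.0.0.2
```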
nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:14.924 09:20:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:14.924 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:14.924 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:14.924 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:15.182 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:15.182 09:20:53 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:15.182 09:20:53 
nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:15.182 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:15.182 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:15.182 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:15.182 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:15.182 09:20:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:15.182 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:27:15.182 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:15.182 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:15.182 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=108342 00:27:15.441 
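The serial and model numbers above are pulled out of the `spdk_nvme_identify` report with a `grep | awk` pipeline. A self-contained sketch using a hypothetical two-line sample of that output (the real report is much longer):

```shell
# Hypothetical excerpt of spdk_nvme_identify output; the pipeline mirrors
# the grep/awk stages in the trace.
identify_output='Serial Number: 12340
Model Number: QEMU NVMe Ctrl'

serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')
printf '%s %s\n' "$serial" "$model"   # prints "12340 QEMU"
```

Note that `awk '{print $3}'` keeps only the third whitespace-separated field, which is why the trace records `nvme_model_number=QEMU` rather than the full "QEMU NVMe Ctrl" string.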
09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.441 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 108342 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 108342 ']' 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.441 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.441 [2024-11-20 09:20:54.338674] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:27:15.441 [2024-11-20 09:20:54.338796] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:15.700 [2024-11-20 09:20:54.490670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:15.700 [2024-11-20 09:20:54.557888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:15.700 [2024-11-20 09:20:54.557946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:15.700 [2024-11-20 09:20:54.557960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:15.700 [2024-11-20 09:20:54.557971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:15.700 [2024-11-20 09:20:54.557980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:15.700 [2024-11-20 09:20:54.559358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.700 [2024-11-20 09:20:54.559515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.700 [2024-11-20 09:20:54.559457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:15.700 [2024-11-20 09:20:54.559509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:15.700 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.700 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:27:15.700 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:15.700 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.700 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 [2024-11-20 09:20:54.735076] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify 
ctrlr handler enabled 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 [2024-11-20 09:20:54.745293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 Nvme0n1 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 
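The RPC calls traced around this point (transport creation, PCIe controller attach, subsystem creation, namespace add, then the TCP listener that follows) can be sketched as a dry run. Here `rpc_cmd` is stubbed to print each call instead of invoking `scripts/rpc.py`, so the sequence is visible without a running target:

```shell
# Dry-run stand-in for the harness's rpc_cmd wrapper: print, don't execute.
rpc_cmd() { echo "rpc.py $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

All five RPC names and arguments are taken verbatim from the trace; only the print-instead-of-execute wrapper is an assumption of this sketch.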
00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.959 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.959 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:15.959 [2024-11-20 09:20:54.872150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.217 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.217 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:16.217 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.217 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.217 [ 00:27:16.217 { 00:27:16.217 "allow_any_host": true, 00:27:16.217 "hosts": [], 00:27:16.217 "listen_addresses": [], 00:27:16.217 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:16.217 "subtype": "Discovery" 00:27:16.217 }, 00:27:16.217 { 00:27:16.217 "allow_any_host": true, 00:27:16.217 "hosts": [], 00:27:16.217 "listen_addresses": [ 00:27:16.217 { 00:27:16.217 "adrfam": "IPv4", 00:27:16.217 "traddr": "10.0.0.2", 00:27:16.217 "trsvcid": "4420", 00:27:16.217 "trtype": "TCP" 00:27:16.217 } 00:27:16.217 ], 00:27:16.217 "max_cntlid": 65519, 00:27:16.217 "max_namespaces": 1, 00:27:16.217 "min_cntlid": 1, 00:27:16.217 "model_number": "SPDK bdev Controller", 00:27:16.217 "namespaces": [ 00:27:16.217 { 00:27:16.217 "bdev_name": "Nvme0n1", 00:27:16.217 "name": "Nvme0n1", 00:27:16.217 "nguid": 
"B1AF9FB0883B4BC3A55AE56A06D69678", 00:27:16.218 "nsid": 1, 00:27:16.218 "uuid": "b1af9fb0-883b-4bc3-a55a-e56a06d69678" 00:27:16.218 } 00:27:16.218 ], 00:27:16.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.218 "serial_number": "SPDK00000000000001", 00:27:16.218 "subtype": "NVMe" 00:27:16.218 } 00:27:16.218 ] 00:27:16.218 09:20:54 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.218 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:16.218 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:16.218 09:20:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:16.476 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:16.476 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:16.476 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:16.476 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:16.735 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:16.736 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.736 09:20:55 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.736 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:16.736 09:20:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:16.736 rmmod nvme_tcp 00:27:16.736 rmmod nvme_fabrics 00:27:16.736 rmmod nvme_keyring 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 108342 ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 108342 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 108342 ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 108342 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108342 00:27:16.736 killing process with pid 108342 00:27:16.736 09:20:55 
nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108342' 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 108342 00:27:16.736 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 108342 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:16.995 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:27:16.995 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.995 09:20:55 nvmf_identify_passthru -- 
nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # continue 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # continue 00:27:16.995 09:20:55 
nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:27:16.995 09:20:55 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:17.255 ************************************ 00:27:17.255 END TEST nvmf_identify_passthru 00:27:17.255 ************************************ 00:27:17.255 00:27:17.255 real 0m2.818s 00:27:17.255 user 0m5.244s 00:27:17.255 sys 0m0.928s 00:27:17.255 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.255 09:20:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:17.255 09:20:55 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.255 09:20:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:17.255 09:20:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.255 09:20:55 -- common/autotest_common.sh@10 -- # set +x 00:27:17.255 ************************************ 00:27:17.255 START TEST nvmf_dif 00:27:17.255 ************************************ 00:27:17.255 09:20:55 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:17.255 * Looking for test storage... 
00:27:17.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.255 --rc genhtml_branch_coverage=1 00:27:17.255 --rc genhtml_function_coverage=1 00:27:17.255 --rc genhtml_legend=1 00:27:17.255 --rc geninfo_all_blocks=1 00:27:17.255 --rc geninfo_unexecuted_blocks=1 00:27:17.255 00:27:17.255 ' 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.255 --rc genhtml_branch_coverage=1 00:27:17.255 --rc genhtml_function_coverage=1 00:27:17.255 --rc genhtml_legend=1 00:27:17.255 --rc geninfo_all_blocks=1 00:27:17.255 --rc geninfo_unexecuted_blocks=1 00:27:17.255 00:27:17.255 ' 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:27:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.255 --rc genhtml_branch_coverage=1 00:27:17.255 --rc genhtml_function_coverage=1 00:27:17.255 --rc genhtml_legend=1 00:27:17.255 --rc geninfo_all_blocks=1 00:27:17.255 --rc geninfo_unexecuted_blocks=1 00:27:17.255 00:27:17.255 ' 00:27:17.255 09:20:56 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.255 --rc genhtml_branch_coverage=1 00:27:17.255 --rc genhtml_function_coverage=1 00:27:17.255 --rc genhtml_legend=1 00:27:17.255 --rc geninfo_all_blocks=1 00:27:17.255 --rc geninfo_unexecuted_blocks=1 00:27:17.255 00:27:17.255 ' 00:27:17.255 09:20:56 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:17.255 09:20:56 
nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.255 09:20:56 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.255 09:20:56 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.255 09:20:56 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.255 09:20:56 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.255 
09:20:56 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:17.255 09:20:56 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:27:17.255 09:20:56 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:17.255 09:20:56 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:17.255 09:20:56 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:17.255 09:20:56 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:17.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:17.256 09:20:56 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:17.256 09:20:56 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:17.256 09:20:56 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:17.256 09:20:56 nvmf_dif -- 
target/dif.sh@15 -- # NULL_DIF=1 00:27:17.256 09:20:56 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:27:17.256 09:20:56 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:17.256 09:20:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:27:17.256 09:20:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:27:17.256 09:20:56 nvmf_dif -- nvmf/common.sh@280 -- # nvmf_veth_init 00:27:17.256 09:20:56 nvmf_dif -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:27:17.256 09:20:56 nvmf_dif -- nvmf/setup.sh@223 -- # create_target_ns 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif 
-- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@224 -- # create_main_bridge 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@105 -- # delete_main_bridge 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:17.516 09:20:56 nvmf_dif -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:17.516 09:20:56 
nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator0 peer=initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == 
veth ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0 up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:17.516 
09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:27:17.516 10.0.0.1 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:17.516 10.0.0.2 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator0 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:27:17.516 09:20:56 nvmf_dif -- 
nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:27:17.516 09:20:56 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target0_br 00:27:17.517 
09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:27:17.517 09:20:56 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@58 -- # create_veth initiator1 initiator1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 
00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@151 -- # set_up initiator1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@151 -- # set_up target1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1 up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@152 -- # set_up target1_br 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br 
up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns target1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772163 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:27:17.517 10.0.0.3 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:27:17.517 09:20:56 nvmf_dif -- 
nvmf/setup.sh@11 -- # local val=167772164 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:27:17.517 10.0.0.4 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@66 -- # set_up initiator1 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.517 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@126 -- 
# local dev=initiator1_br bridge=nvmf_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@129 -- # set_up target1_br 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:27:17.777 09:20:56 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:17.777 09:20:56 nvmf_dif -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 2 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 
1 10.0.0.1' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:17.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:17.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:27:17.777 00:27:17.777 --- 10.0.0.1 ping statistics --- 00:27:17.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.777 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:17.777 09:20:56 
nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:17.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:17.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:27:17.777 00:27:17.777 --- 10.0.0.2 ping statistics --- 00:27:17.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.777 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:27:17.777 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:17.777 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:17.777 00:27:17.777 --- 10.0.0.3 ping statistics --- 00:27:17.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.777 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.777 09:20:56 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # 
ip=10.0.0.4 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:27:17.778 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:17.778 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:27:17.778 00:27:17.778 --- 10.0.0.4 ping statistics --- 00:27:17.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:17.778 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:17.778 09:20:56 nvmf_dif -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:17.778 09:20:56 nvmf_dif -- nvmf/common.sh@281 -- # return 0 00:27:17.778 09:20:56 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:27:17.778 09:20:56 nvmf_dif -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:18.037 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:18.037 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.037 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:18.296 09:20:56 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 
00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:18.296 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo initiator1 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=initiator1 
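[annotation] The `get_ip_address` sequence repeated throughout this trace stores each test interface's IP in `/sys/class/net/<dev>/ifalias` and reads it back (via `ip netns exec` for target-side devices). A self-contained sketch of that lookup, using a fake sysfs tree (`SYSFS_ROOT` is a test double introduced here, not part of setup.sh) so it runs without real interfaces:

```shell
# Sketch of the ifalias-based IP lookup seen repeatedly in the trace.
# SYSFS_ROOT is an illustrative stand-in for /sys/class/net; setup.sh
# reads the real sysfs path, prefixed with "ip netns exec <ns>" when the
# device lives inside the target namespace (elided here).
set -euo pipefail

SYSFS_ROOT=${SYSFS_ROOT:-$(mktemp -d)}

get_ip_address() {
    local dev=$1 ip
    ip=$(cat "$SYSFS_ROOT/$dev/ifalias")
    # Only echo when the alias is actually set, as setup.sh@164 does.
    if [[ -n $ip ]]; then
        echo "$ip"
    fi
}

# Populate the fake tree the way the harness populates ifalias.
mkdir -p "$SYSFS_ROOT/initiator0" "$SYSFS_ROOT/target0"
echo 10.0.0.1 > "$SYSFS_ROOT/initiator0/ifalias"
echo 10.0.0.2 > "$SYSFS_ROOT/target0/ifalias"

get_ip_address initiator0   # prints 10.0.0.1
get_ip_address target0      # prints 10.0.0.2
```

Using ifalias this way gives the scripts a single, namespace-aware source of truth for each device's test address, which is why the same cat/echo sequence recurs for every initiator and target device in the log.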
00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@101 -- # echo target0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target0 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:27:18.297 09:20:56 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:18.297 09:20:57 nvmf_dif -- 
nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@101 -- # echo target1 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@159 -- # dev=target1 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:18.297 09:20:57 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@312 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:18.297 09:20:57 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:18.297 09:20:57 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=108727 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:18.297 09:20:57 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 108727 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 108727 ']' 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.297 09:20:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.297 [2024-11-20 09:20:57.122958] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:27:18.297 [2024-11-20 09:20:57.123063] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.556 [2024-11-20 09:20:57.279012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.556 [2024-11-20 09:20:57.346163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.556 [2024-11-20 09:20:57.346225] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.556 [2024-11-20 09:20:57.346239] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.556 [2024-11-20 09:20:57.346249] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.556 [2024-11-20 09:20:57.346259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
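[annotation] The `waitforlisten 108727` step above launches `nvmf_tgt` in the namespace and then polls until the app is up on `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A simplified sketch of that wait loop; the readiness probe here only checks that the socket path exists and the process is alive, whereas the real autotest helper also confirms the app answers RPCs:

```shell
# Sketch of the waitforlisten pattern: poll until the backgrounded app's
# RPC socket appears, bailing out early if the process dies. Simplified
# from autotest_common.sh; the RPC round-trip check is omitted.
set -euo pipefail

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process exited: give up
        if [[ -S $rpc_addr || -e $rpc_addr ]]; then
            return 0                              # socket is there: ready
        fi
        sleep 0.1
    done
    return 1                                      # retry budget exhausted
}
```

Polling with a bounded retry budget keeps a crashed target from hanging the whole autotest run; the `kill -0` check is what turns a startup failure into a fast, attributable error instead of a timeout.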
00:27:18.556 [2024-11-20 09:20:57.346725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:18.886 09:20:57 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 09:20:57 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.886 09:20:57 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:18.886 09:20:57 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 [2024-11-20 09:20:57.527041] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.886 09:20:57 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 ************************************ 00:27:18.886 START TEST fio_dif_1_default 00:27:18.886 ************************************ 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 bdev_null0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:18.886 [2024-11-20 09:20:57.575182] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:27:18.886 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:18.887 { 00:27:18.887 "params": { 00:27:18.887 "name": "Nvme$subsystem", 00:27:18.887 "trtype": "$TEST_TRANSPORT", 00:27:18.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.887 "adrfam": "ipv4", 00:27:18.887 "trsvcid": "$NVMF_PORT", 00:27:18.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.887 "hdgst": ${hdgst:-false}, 00:27:18.887 "ddgst": ${ddgst:-false} 00:27:18.887 }, 00:27:18.887 "method": "bdev_nvme_attach_controller" 00:27:18.887 } 00:27:18.887 EOF 00:27:18.887 )") 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@56 -- # cat 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 
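[annotation] The `gen_nvmf_target_json` steps above build one `bdev_nvme_attach_controller` stanza per subsystem id, join them with `IFS=,`, and feed the result to the fio bdev plugin as `--spdk_json_conf`. A sketch of that assembly; the field values mirror the config printed in this run, and the `jq` pretty-print pass from common.sh is omitted so the sketch has no external dependencies:

```shell
# Sketch of gen_nvmf_target_json as traced above: emit one attach-controller
# JSON object per subsystem id and join them with commas. Values (traddr,
# trsvcid, NQN patterns) are copied from this run's printed config.
set -euo pipefail

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_target_json 0
```

Generating the bdev config this way lets the same fio job file drive any number of subsystems: `gen_target_json 0 1` would emit two comma-joined stanzas for `cnode0` and `cnode1`, which is how the multi-subsystem tests later in this log reuse the helper.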
00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:18.887 "params": { 00:27:18.887 "name": "Nvme0", 00:27:18.887 "trtype": "tcp", 00:27:18.887 "traddr": "10.0.0.2", 00:27:18.887 "adrfam": "ipv4", 00:27:18.887 "trsvcid": "4420", 00:27:18.887 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:18.887 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:18.887 "hdgst": false, 00:27:18.887 "ddgst": false 00:27:18.887 }, 00:27:18.887 "method": "bdev_nvme_attach_controller" 00:27:18.887 }' 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:18.887 09:20:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:19.146 fio-3.35 00:27:19.146 Starting 1 thread 00:27:31.347 
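[annotation] The fio summary that follows carries the headline numbers for the test (IOPS, bandwidth, latency percentiles). When scanning many autotest logs, a small parser for the `read:` summary line is handy; this helper and its name are illustrative, and the pattern matches fio 3.35's summary format as captured in this run:

```shell
# Illustrative helper: pull IOPS and bandwidth out of a fio "read:" summary
# line of the shape seen below. parse_fio_read_line is not part of the
# autotest scripts; it is a log-scanning convenience sketch.
set -euo pipefail

parse_fio_read_line() {
    local line=$1 iops bw
    iops=$(sed -n 's/.*IOPS=\([0-9.k]*\),.*/\1/p' <<<"$line")
    bw=$(sed -n 's/.*BW=\([0-9.]*[KMG]iB\/s\).*/\1/p' <<<"$line")
    echo "iops=$iops bw=$bw"
}

parse_fio_read_line 'read: IOPS=359, BW=1437KiB/s (1472kB/s)(14.1MiB/10030msec)'
# -> iops=359 bw=1437KiB/s
```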
00:27:31.347 filename0: (groupid=0, jobs=1): err= 0: pid=108798: Wed Nov 20 09:21:08 2024 00:27:31.347 read: IOPS=359, BW=1437KiB/s (1472kB/s)(14.1MiB/10030msec) 00:27:31.347 slat (usec): min=6, max=290, avg= 9.62, stdev= 7.99 00:27:31.347 clat (usec): min=378, max=41582, avg=11102.28, stdev=17794.00 00:27:31.347 lat (usec): min=384, max=41593, avg=11111.90, stdev=17794.22 00:27:31.347 clat percentiles (usec): 00:27:31.347 | 1.00th=[ 412], 5.00th=[ 437], 10.00th=[ 457], 20.00th=[ 482], 00:27:31.347 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 553], 60.00th=[ 586], 00:27:31.347 | 70.00th=[ 668], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:27:31.347 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:27:31.347 | 99.99th=[41681] 00:27:31.347 bw ( KiB/s): min= 768, max= 4128, per=100.00%, avg=1440.00, stdev=703.85, samples=20 00:27:31.347 iops : min= 192, max= 1032, avg=360.00, stdev=175.96, samples=20 00:27:31.347 lat (usec) : 500=28.30%, 750=43.73%, 1000=1.78% 00:27:31.347 lat (msec) : 4=0.11%, 50=26.08% 00:27:31.347 cpu : usr=91.47%, sys=7.68%, ctx=81, majf=0, minf=0 00:27:31.347 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.347 issued rwts: total=3604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.347 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.347 00:27:31.347 Run status group 0 (all jobs): 00:27:31.347 READ: bw=1437KiB/s (1472kB/s), 1437KiB/s-1437KiB/s (1472kB/s-1472kB/s), io=14.1MiB (14.8MB), run=10030-10030msec 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.347 09:21:08 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.347 ************************************ 00:27:31.347 END TEST fio_dif_1_default 00:27:31.347 ************************************ 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.347 00:27:31.347 real 0m11.137s 00:27:31.347 user 0m9.914s 00:27:31.347 sys 0m1.034s 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.347 09:21:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:31.347 09:21:08 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:31.348 09:21:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:31.348 09:21:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 ************************************ 00:27:31.348 START TEST fio_dif_1_multi_subsystems 00:27:31.348 ************************************ 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 bdev_null0 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 [2024-11-20 09:21:08.764924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 bdev_null1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 09:21:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:31.348 { 00:27:31.348 "params": { 00:27:31.348 "name": "Nvme$subsystem", 00:27:31.348 "trtype": "$TEST_TRANSPORT", 00:27:31.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.348 "adrfam": "ipv4", 
00:27:31.348 "trsvcid": "$NVMF_PORT", 00:27:31.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.348 "hdgst": ${hdgst:-false}, 00:27:31.348 "ddgst": ${ddgst:-false} 00:27:31.348 }, 00:27:31.348 "method": "bdev_nvme_attach_controller" 00:27:31.348 } 00:27:31.348 EOF 00:27:31.348 )") 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.348 09:21:08 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:31.348 { 00:27:31.348 "params": { 00:27:31.348 "name": "Nvme$subsystem", 00:27:31.348 "trtype": "$TEST_TRANSPORT", 00:27:31.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.348 "adrfam": "ipv4", 00:27:31.348 "trsvcid": "$NVMF_PORT", 00:27:31.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.348 "hdgst": ${hdgst:-false}, 00:27:31.348 "ddgst": ${ddgst:-false} 00:27:31.348 }, 00:27:31.348 "method": "bdev_nvme_attach_controller" 00:27:31.348 } 00:27:31.348 EOF 00:27:31.348 )") 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:31.348 "params": { 00:27:31.348 "name": "Nvme0", 00:27:31.348 "trtype": "tcp", 00:27:31.348 "traddr": "10.0.0.2", 00:27:31.348 "adrfam": "ipv4", 00:27:31.348 "trsvcid": "4420", 00:27:31.348 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.348 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.348 "hdgst": false, 00:27:31.348 "ddgst": false 00:27:31.348 }, 00:27:31.348 "method": "bdev_nvme_attach_controller" 00:27:31.348 },{ 00:27:31.348 "params": { 00:27:31.348 "name": "Nvme1", 00:27:31.348 "trtype": "tcp", 00:27:31.348 "traddr": "10.0.0.2", 00:27:31.348 "adrfam": "ipv4", 00:27:31.348 "trsvcid": "4420", 00:27:31.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.348 "hdgst": false, 00:27:31.348 "ddgst": false 00:27:31.348 }, 00:27:31.348 "method": "bdev_nvme_attach_controller" 00:27:31.348 }' 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:31.348 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:31.349 09:21:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.349 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:31.349 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:31.349 fio-3.35 00:27:31.349 Starting 2 threads 00:27:41.311 00:27:41.311 filename0: (groupid=0, jobs=1): err= 0: pid=108958: Wed Nov 20 09:21:19 2024 00:27:41.311 read: IOPS=155, BW=621KiB/s (636kB/s)(6224KiB/10020msec) 00:27:41.311 slat (nsec): min=6857, max=95080, avg=12129.84, stdev=8414.29 00:27:41.311 clat (usec): min=447, max=42883, avg=25717.45, stdev=19683.62 00:27:41.311 lat (usec): min=455, max=42898, avg=25729.58, stdev=19683.40 00:27:41.311 clat percentiles (usec): 00:27:41.311 | 1.00th=[ 461], 5.00th=[ 478], 10.00th=[ 494], 20.00th=[ 537], 00:27:41.311 | 30.00th=[ 734], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:27:41.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:27:41.311 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:41.311 | 99.99th=[42730] 00:27:41.311 bw ( KiB/s): min= 416, max= 864, per=50.56%, avg=620.80, stdev=137.89, samples=20 00:27:41.311 iops : min= 104, max= 216, avg=155.20, stdev=34.47, samples=20 00:27:41.311 lat (usec) : 500=12.02%, 750=18.06%, 1000=6.23% 00:27:41.311 lat (msec) : 2=1.74%, 50=61.95% 00:27:41.311 cpu : usr=95.33%, sys=4.21%, ctx=71, majf=0, minf=0 00:27:41.311 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.311 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.311 issued rwts: total=1556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.311 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:41.311 filename1: (groupid=0, jobs=1): err= 0: pid=108959: Wed Nov 20 09:21:19 2024 00:27:41.311 read: IOPS=151, BW=605KiB/s (620kB/s)(6064KiB/10020msec) 00:27:41.311 slat (nsec): min=7138, max=94573, avg=12180.70, stdev=7973.60 00:27:41.311 clat (usec): min=440, max=42903, avg=26396.68, stdev=19523.85 00:27:41.311 lat (usec): min=447, max=42935, avg=26408.86, stdev=19523.61 00:27:41.311 clat percentiles (usec): 00:27:41.311 | 1.00th=[ 457], 5.00th=[ 478], 10.00th=[ 498], 20.00th=[ 545], 00:27:41.311 | 30.00th=[ 791], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:27:41.311 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:27:41.311 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:41.311 | 99.99th=[42730] 00:27:41.311 bw ( KiB/s): min= 416, max= 800, per=49.25%, avg=604.80, stdev=115.56, samples=20 00:27:41.311 iops : min= 104, max= 200, avg=151.20, stdev=28.89, samples=20 00:27:41.311 lat (usec) : 500=10.75%, 750=18.73%, 1000=5.47% 00:27:41.311 lat (msec) : 2=1.45%, 50=63.59% 00:27:41.311 cpu : usr=95.16%, sys=4.39%, ctx=9, majf=0, minf=9 00:27:41.311 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:41.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:41.311 issued rwts: total=1516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:41.311 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:41.311 00:27:41.311 Run status group 0 (all jobs): 00:27:41.311 READ: bw=1226KiB/s (1256kB/s), 605KiB/s-621KiB/s (620kB/s-636kB/s), io=12.0MiB (12.6MB), run=10020-10020msec 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 09:21:20 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 ************************************ 00:27:41.311 END TEST fio_dif_1_multi_subsystems 00:27:41.311 ************************************ 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.311 00:27:41.311 real 0m11.322s 00:27:41.311 user 0m19.992s 00:27:41.311 sys 0m1.181s 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 09:21:20 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:41.311 09:21:20 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:41.311 09:21:20 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:41.311 09:21:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:41.311 ************************************ 00:27:41.311 START TEST fio_dif_rand_params 00:27:41.311 ************************************ 00:27:41.311 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:41.311 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:41.311 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:41.312 09:21:20 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.312 bdev_null0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:41.312 [2024-11-20 09:21:20.134205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:41.312 { 00:27:41.312 "params": { 00:27:41.312 "name": "Nvme$subsystem", 00:27:41.312 "trtype": "$TEST_TRANSPORT", 00:27:41.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:41.312 "adrfam": "ipv4", 00:27:41.312 "trsvcid": "$NVMF_PORT", 00:27:41.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:41.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:41.312 "hdgst": ${hdgst:-false}, 00:27:41.312 "ddgst": ${ddgst:-false} 00:27:41.312 }, 00:27:41.312 "method": "bdev_nvme_attach_controller" 00:27:41.312 } 00:27:41.312 EOF 00:27:41.312 )") 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:41.312 "params": { 00:27:41.312 "name": "Nvme0", 00:27:41.312 "trtype": "tcp", 00:27:41.312 "traddr": "10.0.0.2", 00:27:41.312 "adrfam": "ipv4", 00:27:41.312 "trsvcid": "4420", 00:27:41.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:41.312 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:41.312 "hdgst": false, 00:27:41.312 "ddgst": false 00:27:41.312 }, 00:27:41.312 "method": "bdev_nvme_attach_controller" 00:27:41.312 }' 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:41.312 09:21:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:41.570 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:41.570 ... 00:27:41.570 fio-3.35 00:27:41.570 Starting 3 threads 00:27:48.160 00:27:48.160 filename0: (groupid=0, jobs=1): err= 0: pid=109110: Wed Nov 20 09:21:25 2024 00:27:48.160 read: IOPS=245, BW=30.6MiB/s (32.1MB/s)(153MiB/5006msec) 00:27:48.160 slat (nsec): min=7712, max=41209, avg=12641.65, stdev=3325.12 00:27:48.160 clat (usec): min=5374, max=52905, avg=12220.75, stdev=5020.18 00:27:48.160 lat (usec): min=5385, max=52918, avg=12233.39, stdev=5020.37 00:27:48.160 clat percentiles (usec): 00:27:48.160 | 1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10814], 00:27:48.160 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11994], 00:27:48.160 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13042], 95.00th=[13960], 00:27:48.160 | 99.00th=[50070], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:27:48.160 | 99.99th=[52691] 00:27:48.160 bw ( KiB/s): min=25344, max=35072, per=36.95%, avg=31360.00, stdev=2825.68, samples=10 00:27:48.160 iops : min= 198, max= 274, avg=245.00, stdev=22.08, samples=10 00:27:48.160 lat (msec) : 10=11.08%, 20=87.20%, 50=0.65%, 100=1.06% 00:27:48.160 cpu : usr=92.11%, sys=6.21%, ctx=24, majf=0, minf=0 00:27:48.160 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.160 issued rwts: total=1227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.160 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.160 filename0: (groupid=0, jobs=1): err= 0: pid=109111: Wed Nov 20 09:21:25 2024 00:27:48.160 read: IOPS=194, BW=24.4MiB/s (25.5MB/s)(122MiB/5005msec) 00:27:48.160 slat (nsec): min=7513, max=37088, avg=12520.12, stdev=3657.10 00:27:48.160 clat (usec): min=4138, 
max=52411, avg=15375.27, stdev=4069.76 00:27:48.161 lat (usec): min=4146, max=52419, avg=15387.79, stdev=4070.22 00:27:48.161 clat percentiles (usec): 00:27:48.161 | 1.00th=[ 4228], 5.00th=[ 5800], 10.00th=[ 9896], 20.00th=[13698], 00:27:48.161 | 30.00th=[14746], 40.00th=[15795], 50.00th=[16450], 60.00th=[16909], 00:27:48.161 | 70.00th=[17171], 80.00th=[17695], 90.00th=[18220], 95.00th=[19006], 00:27:48.161 | 99.00th=[21627], 99.50th=[22414], 99.90th=[52167], 99.95th=[52167], 00:27:48.161 | 99.99th=[52167] 00:27:48.161 bw ( KiB/s): min=22016, max=30012, per=29.36%, avg=24914.80, stdev=2920.57, samples=10 00:27:48.161 iops : min= 172, max= 234, avg=194.60, stdev=22.73, samples=10 00:27:48.161 lat (msec) : 10=10.05%, 20=87.69%, 50=1.95%, 100=0.31% 00:27:48.161 cpu : usr=92.53%, sys=6.04%, ctx=24, majf=0, minf=0 00:27:48.161 IO depths : 1=10.6%, 2=89.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.161 issued rwts: total=975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.161 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.161 filename0: (groupid=0, jobs=1): err= 0: pid=109112: Wed Nov 20 09:21:25 2024 00:27:48.161 read: IOPS=223, BW=27.9MiB/s (29.3MB/s)(140MiB/5005msec) 00:27:48.161 slat (nsec): min=7476, max=47488, avg=12797.94, stdev=3390.21 00:27:48.161 clat (usec): min=3839, max=56594, avg=13422.50, stdev=5798.77 00:27:48.161 lat (usec): min=3850, max=56607, avg=13435.29, stdev=5798.81 00:27:48.161 clat percentiles (usec): 00:27:48.161 | 1.00th=[ 4817], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[11731], 00:27:48.161 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13173], 00:27:48.161 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14746], 95.00th=[15664], 00:27:48.161 | 99.00th=[52691], 99.50th=[54789], 99.90th=[56361], 99.95th=[56361], 00:27:48.161 | 
99.99th=[56361] 00:27:48.161 bw ( KiB/s): min=23808, max=33024, per=33.58%, avg=28501.33, stdev=2582.30, samples=9 00:27:48.161 iops : min= 186, max= 258, avg=222.67, stdev=20.17, samples=9 00:27:48.161 lat (msec) : 4=0.18%, 10=6.27%, 20=91.41%, 50=0.72%, 100=1.43% 00:27:48.161 cpu : usr=92.01%, sys=6.45%, ctx=4, majf=0, minf=0 00:27:48.161 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:48.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:48.161 issued rwts: total=1117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:48.161 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:48.161 00:27:48.161 Run status group 0 (all jobs): 00:27:48.161 READ: bw=82.9MiB/s (86.9MB/s), 24.4MiB/s-30.6MiB/s (25.5MB/s-32.1MB/s), io=415MiB (435MB), run=5005-5006msec 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 bdev_null0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 
09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 [2024-11-20 09:21:26.270202] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 bdev_null1 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 
09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:27:48.161 bdev_null2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem 
config 00:27:48.161 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:48.162 { 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme$subsystem", 00:27:48.162 "trtype": "$TEST_TRANSPORT", 00:27:48.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "$NVMF_PORT", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.162 "hdgst": ${hdgst:-false}, 00:27:48.162 "ddgst": ${ddgst:-false} 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 } 00:27:48.162 EOF 00:27:48.162 )") 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:48.162 
09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:48.162 { 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme$subsystem", 00:27:48.162 "trtype": "$TEST_TRANSPORT", 00:27:48.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "$NVMF_PORT", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.162 "hdgst": ${hdgst:-false}, 00:27:48.162 "ddgst": ${ddgst:-false} 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 } 00:27:48.162 EOF 00:27:48.162 )") 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.162 09:21:26 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:48.162 { 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme$subsystem", 00:27:48.162 "trtype": "$TEST_TRANSPORT", 00:27:48.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "$NVMF_PORT", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:48.162 "hdgst": ${hdgst:-false}, 00:27:48.162 "ddgst": ${ddgst:-false} 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 } 00:27:48.162 EOF 00:27:48.162 )") 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme0", 00:27:48.162 "trtype": "tcp", 00:27:48.162 "traddr": "10.0.0.2", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "4420", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:48.162 "hdgst": false, 00:27:48.162 "ddgst": false 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 },{ 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme1", 00:27:48.162 "trtype": "tcp", 00:27:48.162 "traddr": "10.0.0.2", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "4420", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:48.162 "hdgst": false, 00:27:48.162 "ddgst": false 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 },{ 00:27:48.162 "params": { 00:27:48.162 "name": "Nvme2", 00:27:48.162 "trtype": "tcp", 00:27:48.162 "traddr": "10.0.0.2", 00:27:48.162 "adrfam": "ipv4", 00:27:48.162 "trsvcid": "4420", 00:27:48.162 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:48.162 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:48.162 "hdgst": false, 00:27:48.162 "ddgst": false 00:27:48.162 }, 00:27:48.162 "method": "bdev_nvme_attach_controller" 00:27:48.162 }' 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:48.162 09:21:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:48.162 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.162 ... 00:27:48.162 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.162 ... 00:27:48.162 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:48.162 ... 
00:27:48.162 fio-3.35 00:27:48.162 Starting 24 threads 00:28:00.360 00:28:00.360 filename0: (groupid=0, jobs=1): err= 0: pid=109216: Wed Nov 20 09:21:37 2024 00:28:00.360 read: IOPS=181, BW=725KiB/s (743kB/s)(7264KiB/10015msec) 00:28:00.360 slat (usec): min=5, max=8022, avg=26.30, stdev=308.49 00:28:00.360 clat (msec): min=15, max=183, avg=88.10, stdev=25.83 00:28:00.360 lat (msec): min=15, max=183, avg=88.13, stdev=25.82 00:28:00.360 clat percentiles (msec): 00:28:00.360 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 58], 20.00th=[ 70], 00:28:00.360 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 95], 00:28:00.360 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 136], 00:28:00.360 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 184], 99.95th=[ 184], 00:28:00.360 | 99.99th=[ 184] 00:28:00.360 bw ( KiB/s): min= 512, max= 896, per=3.79%, avg=703.26, stdev=130.30, samples=19 00:28:00.360 iops : min= 128, max= 224, avg=175.74, stdev=32.55, samples=19 00:28:00.360 lat (msec) : 20=0.17%, 50=5.51%, 100=61.34%, 250=32.98% 00:28:00.360 cpu : usr=43.84%, sys=1.12%, ctx=1218, majf=0, minf=9 00:28:00.360 IO depths : 1=1.9%, 2=4.6%, 4=14.0%, 8=68.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:28:00.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.360 filename0: (groupid=0, jobs=1): err= 0: pid=109217: Wed Nov 20 09:21:37 2024 00:28:00.360 read: IOPS=172, BW=691KiB/s (707kB/s)(6912KiB/10008msec) 00:28:00.360 slat (usec): min=4, max=8022, avg=17.93, stdev=215.51 00:28:00.360 clat (msec): min=8, max=202, avg=92.52, stdev=30.37 00:28:00.360 lat (msec): min=8, max=202, avg=92.53, stdev=30.38 00:28:00.360 clat percentiles (msec): 00:28:00.360 | 1.00th=[ 13], 5.00th=[ 57], 10.00th=[ 63], 20.00th=[ 71], 00:28:00.360 | 
30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 87], 60.00th=[ 100], 00:28:00.360 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 131], 95.00th=[ 144], 00:28:00.360 | 99.00th=[ 190], 99.50th=[ 201], 99.90th=[ 203], 99.95th=[ 203], 00:28:00.360 | 99.99th=[ 203] 00:28:00.360 bw ( KiB/s): min= 440, max= 944, per=3.56%, avg=660.21, stdev=148.07, samples=19 00:28:00.360 iops : min= 110, max= 236, avg=165.05, stdev=37.02, samples=19 00:28:00.360 lat (msec) : 10=0.93%, 20=1.85%, 50=0.23%, 100=57.87%, 250=39.12% 00:28:00.360 cpu : usr=41.21%, sys=1.17%, ctx=1182, majf=0, minf=9 00:28:00.360 IO depths : 1=3.1%, 2=6.9%, 4=17.8%, 8=62.3%, 16=9.9%, 32=0.0%, >=64=0.0% 00:28:00.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.360 filename0: (groupid=0, jobs=1): err= 0: pid=109218: Wed Nov 20 09:21:37 2024 00:28:00.360 read: IOPS=200, BW=803KiB/s (823kB/s)(8056KiB/10029msec) 00:28:00.360 slat (usec): min=7, max=7579, avg=16.82, stdev=190.86 00:28:00.360 clat (msec): min=14, max=189, avg=79.49, stdev=31.80 00:28:00.360 lat (msec): min=14, max=189, avg=79.51, stdev=31.80 00:28:00.360 clat percentiles (msec): 00:28:00.360 | 1.00th=[ 23], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 52], 00:28:00.360 | 30.00th=[ 58], 40.00th=[ 68], 50.00th=[ 74], 60.00th=[ 80], 00:28:00.360 | 70.00th=[ 89], 80.00th=[ 106], 90.00th=[ 136], 95.00th=[ 146], 00:28:00.360 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 190], 99.95th=[ 190], 00:28:00.360 | 99.99th=[ 190] 00:28:00.360 bw ( KiB/s): min= 512, max= 1088, per=4.32%, avg=801.20, stdev=207.53, samples=20 00:28:00.360 iops : min= 128, max= 272, avg=200.25, stdev=51.84, samples=20 00:28:00.360 lat (msec) : 20=0.79%, 50=18.37%, 100=55.86%, 250=24.98% 00:28:00.360 cpu : usr=37.44%, sys=1.08%, ctx=1400, majf=0, 
minf=9 00:28:00.360 IO depths : 1=0.7%, 2=2.1%, 4=11.1%, 8=73.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:28:00.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.360 filename0: (groupid=0, jobs=1): err= 0: pid=109219: Wed Nov 20 09:21:37 2024 00:28:00.360 read: IOPS=207, BW=830KiB/s (850kB/s)(8316KiB/10022msec) 00:28:00.360 slat (usec): min=6, max=4021, avg=12.65, stdev=88.08 00:28:00.360 clat (msec): min=32, max=186, avg=76.99, stdev=19.39 00:28:00.360 lat (msec): min=32, max=186, avg=77.00, stdev=19.39 00:28:00.360 clat percentiles (msec): 00:28:00.360 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 62], 00:28:00.360 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 80], 00:28:00.360 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 112], 00:28:00.360 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 188], 99.95th=[ 188], 00:28:00.360 | 99.99th=[ 188] 00:28:00.360 bw ( KiB/s): min= 720, max= 976, per=4.46%, avg=828.26, stdev=72.61, samples=19 00:28:00.360 iops : min= 180, max= 244, avg=207.05, stdev=18.16, samples=19 00:28:00.360 lat (msec) : 50=8.37%, 100=80.90%, 250=10.73% 00:28:00.360 cpu : usr=40.18%, sys=1.06%, ctx=1110, majf=0, minf=9 00:28:00.360 IO depths : 1=0.9%, 2=1.9%, 4=8.4%, 8=76.1%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:00.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.360 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.360 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.360 filename0: (groupid=0, jobs=1): err= 0: pid=109220: Wed Nov 20 09:21:37 2024 00:28:00.360 read: IOPS=180, BW=724KiB/s (741kB/s)(7244KiB/10012msec) 
00:28:00.360 slat (usec): min=3, max=4023, avg=15.39, stdev=133.27 00:28:00.360 clat (msec): min=12, max=190, avg=88.36, stdev=27.91 00:28:00.360 lat (msec): min=12, max=190, avg=88.38, stdev=27.91 00:28:00.360 clat percentiles (msec): 00:28:00.361 | 1.00th=[ 20], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 68], 00:28:00.361 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 96], 00:28:00.361 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 138], 00:28:00.361 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 190], 00:28:00.361 | 99.99th=[ 190] 00:28:00.361 bw ( KiB/s): min= 496, max= 1024, per=3.86%, avg=717.65, stdev=152.78, samples=20 00:28:00.361 iops : min= 124, max= 256, avg=179.35, stdev=38.19, samples=20 00:28:00.361 lat (msec) : 20=1.77%, 50=6.02%, 100=58.59%, 250=33.63% 00:28:00.361 cpu : usr=40.54%, sys=1.16%, ctx=1154, majf=0, minf=9 00:28:00.361 IO depths : 1=2.3%, 2=5.2%, 4=13.6%, 8=68.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=1811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename0: (groupid=0, jobs=1): err= 0: pid=109221: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=205, BW=821KiB/s (841kB/s)(8240KiB/10034msec) 00:28:00.361 slat (nsec): min=4375, max=71330, avg=10461.22, stdev=4230.71 00:28:00.361 clat (msec): min=10, max=159, avg=77.79, stdev=32.93 00:28:00.361 lat (msec): min=10, max=159, avg=77.80, stdev=32.93 00:28:00.361 clat percentiles (msec): 00:28:00.361 | 1.00th=[ 20], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 48], 00:28:00.361 | 30.00th=[ 55], 40.00th=[ 62], 50.00th=[ 71], 60.00th=[ 81], 00:28:00.361 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 129], 95.00th=[ 142], 00:28:00.361 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 
00:28:00.361 | 99.99th=[ 161] 00:28:00.361 bw ( KiB/s): min= 432, max= 1293, per=4.41%, avg=819.00, stdev=274.58, samples=20 00:28:00.361 iops : min= 108, max= 323, avg=204.70, stdev=68.58, samples=20 00:28:00.361 lat (msec) : 20=1.26%, 50=22.77%, 100=48.83%, 250=27.14% 00:28:00.361 cpu : usr=34.99%, sys=1.05%, ctx=1553, majf=0, minf=9 00:28:00.361 IO depths : 1=0.2%, 2=0.7%, 4=5.8%, 8=78.7%, 16=14.6%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=89.5%, 8=7.2%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename0: (groupid=0, jobs=1): err= 0: pid=109222: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=170, BW=683KiB/s (699kB/s)(6840KiB/10016msec) 00:28:00.361 slat (usec): min=4, max=8021, avg=21.27, stdev=273.84 00:28:00.361 clat (msec): min=15, max=191, avg=93.54, stdev=27.87 00:28:00.361 lat (msec): min=15, max=191, avg=93.56, stdev=27.86 00:28:00.361 clat percentiles (msec): 00:28:00.361 | 1.00th=[ 40], 5.00th=[ 53], 10.00th=[ 64], 20.00th=[ 71], 00:28:00.361 | 30.00th=[ 75], 40.00th=[ 84], 50.00th=[ 88], 60.00th=[ 105], 00:28:00.361 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 123], 95.00th=[ 146], 00:28:00.361 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 192], 99.95th=[ 192], 00:28:00.361 | 99.99th=[ 192] 00:28:00.361 bw ( KiB/s): min= 512, max= 896, per=3.59%, avg=666.95, stdev=138.72, samples=19 00:28:00.361 iops : min= 128, max= 224, avg=166.74, stdev=34.68, samples=19 00:28:00.361 lat (msec) : 20=0.82%, 50=3.63%, 100=53.80%, 250=41.75% 00:28:00.361 cpu : usr=36.41%, sys=1.05%, ctx=1019, majf=0, minf=9 00:28:00.361 IO depths : 1=3.5%, 2=7.3%, 4=19.2%, 8=60.7%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=91.9%, 
8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=1710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename0: (groupid=0, jobs=1): err= 0: pid=109223: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=176, BW=707KiB/s (724kB/s)(7072KiB/10002msec) 00:28:00.361 slat (usec): min=4, max=8027, avg=16.33, stdev=190.70 00:28:00.361 clat (msec): min=34, max=191, avg=90.36, stdev=29.66 00:28:00.361 lat (msec): min=34, max=191, avg=90.37, stdev=29.66 00:28:00.361 clat percentiles (msec): 00:28:00.361 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:28:00.361 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 86], 60.00th=[ 96], 00:28:00.361 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 132], 95.00th=[ 148], 00:28:00.361 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:28:00.361 | 99.99th=[ 192] 00:28:00.361 bw ( KiB/s): min= 512, max= 944, per=3.76%, avg=697.26, stdev=160.94, samples=19 00:28:00.361 iops : min= 128, max= 236, avg=174.32, stdev=40.23, samples=19 00:28:00.361 lat (msec) : 50=8.82%, 100=55.32%, 250=35.86% 00:28:00.361 cpu : usr=32.39%, sys=0.79%, ctx=917, majf=0, minf=9 00:28:00.361 IO depths : 1=2.5%, 2=5.1%, 4=14.9%, 8=66.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=90.7%, 8=4.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename1: (groupid=0, jobs=1): err= 0: pid=109224: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=172, BW=691KiB/s (708kB/s)(6912KiB/10003msec) 00:28:00.361 slat (nsec): min=4918, max=42301, avg=11309.65, stdev=4268.68 00:28:00.361 clat (msec): min=7, max=188, avg=92.52, stdev=29.81 00:28:00.361 lat (msec): min=7, max=188, avg=92.54, stdev=29.81 00:28:00.361 
clat percentiles (msec): 00:28:00.361 | 1.00th=[ 11], 5.00th=[ 48], 10.00th=[ 65], 20.00th=[ 71], 00:28:00.361 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 89], 60.00th=[ 104], 00:28:00.361 | 70.00th=[ 107], 80.00th=[ 113], 90.00th=[ 129], 95.00th=[ 150], 00:28:00.361 | 99.00th=[ 178], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 190], 00:28:00.361 | 99.99th=[ 190] 00:28:00.361 bw ( KiB/s): min= 512, max= 896, per=3.55%, avg=659.47, stdev=150.60, samples=19 00:28:00.361 iops : min= 128, max= 224, avg=164.84, stdev=37.66, samples=19 00:28:00.361 lat (msec) : 10=0.93%, 20=0.93%, 50=3.30%, 100=52.43%, 250=42.42% 00:28:00.361 cpu : usr=44.56%, sys=1.22%, ctx=1198, majf=0, minf=9 00:28:00.361 IO depths : 1=3.3%, 2=7.1%, 4=18.8%, 8=61.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename1: (groupid=0, jobs=1): err= 0: pid=109225: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=204, BW=819KiB/s (839kB/s)(8204KiB/10012msec) 00:28:00.361 slat (usec): min=3, max=4783, avg=19.37, stdev=186.22 00:28:00.361 clat (msec): min=24, max=168, avg=77.96, stdev=23.99 00:28:00.361 lat (msec): min=24, max=168, avg=77.97, stdev=23.99 00:28:00.361 clat percentiles (msec): 00:28:00.361 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 56], 00:28:00.361 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:28:00.361 | 70.00th=[ 88], 80.00th=[ 101], 90.00th=[ 110], 95.00th=[ 123], 00:28:00.361 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:28:00.361 | 99.99th=[ 169] 00:28:00.361 bw ( KiB/s): min= 640, max= 1048, per=4.31%, avg=799.58, stdev=129.75, samples=19 00:28:00.361 iops : min= 160, max= 262, avg=199.89, stdev=32.44, samples=19 00:28:00.361 lat 
(msec) : 50=14.19%, 100=65.04%, 250=20.77% 00:28:00.361 cpu : usr=38.50%, sys=1.18%, ctx=1366, majf=0, minf=9 00:28:00.361 IO depths : 1=1.2%, 2=4.0%, 4=13.2%, 8=69.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:00.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 complete : 0=0.0%, 4=91.1%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.361 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.361 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.361 filename1: (groupid=0, jobs=1): err= 0: pid=109226: Wed Nov 20 09:21:37 2024 00:28:00.361 read: IOPS=191, BW=766KiB/s (784kB/s)(7664KiB/10005msec) 00:28:00.361 slat (usec): min=4, max=8022, avg=18.56, stdev=243.31 00:28:00.361 clat (msec): min=5, max=203, avg=83.42, stdev=32.29 00:28:00.361 lat (msec): min=5, max=203, avg=83.44, stdev=32.29 00:28:00.361 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 8], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 55], 00:28:00.362 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 88], 00:28:00.362 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 127], 95.00th=[ 144], 00:28:00.362 | 99.00th=[ 169], 99.50th=[ 171], 99.90th=[ 205], 99.95th=[ 205], 00:28:00.362 | 99.99th=[ 205] 00:28:00.362 bw ( KiB/s): min= 384, max= 1168, per=3.97%, avg=736.00, stdev=212.03, samples=19 00:28:00.362 iops : min= 96, max= 292, avg=184.00, stdev=53.01, samples=19 00:28:00.362 lat (msec) : 10=1.15%, 20=1.67%, 50=11.38%, 100=53.91%, 250=31.89% 00:28:00.362 cpu : usr=34.74%, sys=1.08%, ctx=1123, majf=0, minf=9 00:28:00.362 IO depths : 1=1.2%, 2=2.6%, 4=9.4%, 8=74.6%, 16=12.2%, 32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename1: (groupid=0, 
jobs=1): err= 0: pid=109227: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=228, BW=915KiB/s (937kB/s)(9188KiB/10042msec) 00:28:00.362 slat (usec): min=3, max=4018, avg=12.48, stdev=83.73 00:28:00.362 clat (msec): min=3, max=153, avg=69.73, stdev=26.55 00:28:00.362 lat (msec): min=3, max=153, avg=69.74, stdev=26.55 00:28:00.362 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 43], 20.00th=[ 48], 00:28:00.362 | 30.00th=[ 56], 40.00th=[ 61], 50.00th=[ 71], 60.00th=[ 73], 00:28:00.362 | 70.00th=[ 79], 80.00th=[ 85], 90.00th=[ 106], 95.00th=[ 121], 00:28:00.362 | 99.00th=[ 144], 99.50th=[ 155], 99.90th=[ 155], 99.95th=[ 155], 00:28:00.362 | 99.99th=[ 155] 00:28:00.362 bw ( KiB/s): min= 640, max= 1408, per=4.92%, avg=912.40, stdev=210.45, samples=20 00:28:00.362 iops : min= 160, max= 352, avg=228.10, stdev=52.61, samples=20 00:28:00.362 lat (msec) : 4=0.70%, 10=0.87%, 20=1.22%, 50=19.55%, 100=65.74% 00:28:00.362 lat (msec) : 250=11.93% 00:28:00.362 cpu : usr=42.66%, sys=1.23%, ctx=1092, majf=0, minf=0 00:28:00.362 IO depths : 1=2.0%, 2=4.2%, 4=13.0%, 8=69.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename1: (groupid=0, jobs=1): err= 0: pid=109228: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=182, BW=731KiB/s (749kB/s)(7332KiB/10029msec) 00:28:00.362 slat (usec): min=4, max=555, avg=11.85, stdev=13.65 00:28:00.362 clat (msec): min=23, max=191, avg=87.37, stdev=26.45 00:28:00.362 lat (msec): min=23, max=191, avg=87.38, stdev=26.45 00:28:00.362 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 66], 00:28:00.362 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 96], 
00:28:00.362 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 136], 00:28:00.362 | 99.00th=[ 153], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 192], 00:28:00.362 | 99.99th=[ 192] 00:28:00.362 bw ( KiB/s): min= 512, max= 920, per=3.86%, avg=717.89, stdev=143.60, samples=19 00:28:00.362 iops : min= 128, max= 230, avg=179.47, stdev=35.90, samples=19 00:28:00.362 lat (msec) : 50=8.46%, 100=58.05%, 250=33.50% 00:28:00.362 cpu : usr=41.00%, sys=1.11%, ctx=1128, majf=0, minf=10 00:28:00.362 IO depths : 1=3.5%, 2=7.2%, 4=16.9%, 8=62.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=91.8%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename1: (groupid=0, jobs=1): err= 0: pid=109229: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=214, BW=858KiB/s (878kB/s)(8612KiB/10039msec) 00:28:00.362 slat (usec): min=4, max=3900, avg=13.15, stdev=83.95 00:28:00.362 clat (msec): min=16, max=182, avg=74.50, stdev=22.57 00:28:00.362 lat (msec): min=16, max=182, avg=74.51, stdev=22.57 00:28:00.362 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:28:00.362 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 79], 00:28:00.362 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 111], 00:28:00.362 | 99.00th=[ 134], 99.50th=[ 182], 99.90th=[ 182], 99.95th=[ 182], 00:28:00.362 | 99.99th=[ 182] 00:28:00.362 bw ( KiB/s): min= 720, max= 1080, per=4.60%, avg=854.40, stdev=96.72, samples=20 00:28:00.362 iops : min= 180, max= 270, avg=213.60, stdev=24.18, samples=20 00:28:00.362 lat (msec) : 20=0.74%, 50=13.61%, 100=74.27%, 250=11.38% 00:28:00.362 cpu : usr=37.67%, sys=0.99%, ctx=1184, majf=0, minf=9 00:28:00.362 IO depths : 1=0.9%, 2=2.0%, 4=8.5%, 8=75.6%, 16=13.0%, 
32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=89.7%, 8=6.0%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=2153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename1: (groupid=0, jobs=1): err= 0: pid=109230: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=201, BW=807KiB/s (827kB/s)(8116KiB/10054msec) 00:28:00.362 slat (usec): min=4, max=8022, avg=23.15, stdev=307.91 00:28:00.362 clat (msec): min=7, max=168, avg=79.09, stdev=27.51 00:28:00.362 lat (msec): min=7, max=168, avg=79.12, stdev=27.51 00:28:00.362 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 14], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 59], 00:28:00.362 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:28:00.362 | 70.00th=[ 88], 80.00th=[ 101], 90.00th=[ 121], 95.00th=[ 131], 00:28:00.362 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 169], 99.95th=[ 169], 00:28:00.362 | 99.99th=[ 169] 00:28:00.362 bw ( KiB/s): min= 512, max= 1280, per=4.33%, avg=805.00, stdev=174.69, samples=20 00:28:00.362 iops : min= 128, max= 320, avg=201.25, stdev=43.67, samples=20 00:28:00.362 lat (msec) : 10=0.79%, 20=0.79%, 50=12.07%, 100=66.58%, 250=19.76% 00:28:00.362 cpu : usr=33.95%, sys=1.00%, ctx=918, majf=0, minf=9 00:28:00.362 IO depths : 1=1.6%, 2=3.4%, 4=11.5%, 8=71.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=90.4%, 8=5.1%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename1: (groupid=0, jobs=1): err= 0: pid=109231: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=171, BW=685KiB/s (701kB/s)(6848KiB/10002msec) 00:28:00.362 slat (usec): min=4, max=8021, avg=16.33, 
stdev=193.63 00:28:00.362 clat (msec): min=3, max=194, avg=93.38, stdev=30.61 00:28:00.362 lat (msec): min=3, max=194, avg=93.39, stdev=30.60 00:28:00.362 clat percentiles (msec): 00:28:00.362 | 1.00th=[ 9], 5.00th=[ 48], 10.00th=[ 64], 20.00th=[ 71], 00:28:00.362 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 103], 00:28:00.362 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 129], 95.00th=[ 148], 00:28:00.362 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 194], 99.95th=[ 194], 00:28:00.362 | 99.99th=[ 194] 00:28:00.362 bw ( KiB/s): min= 384, max= 896, per=3.55%, avg=659.68, stdev=145.47, samples=19 00:28:00.362 iops : min= 96, max= 224, avg=164.89, stdev=36.35, samples=19 00:28:00.362 lat (msec) : 4=0.93%, 10=0.93%, 20=0.93%, 50=2.86%, 100=49.30% 00:28:00.362 lat (msec) : 250=45.04% 00:28:00.362 cpu : usr=37.02%, sys=0.88%, ctx=1063, majf=0, minf=9 00:28:00.362 IO depths : 1=3.2%, 2=7.0%, 4=18.3%, 8=62.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:28:00.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 complete : 0=0.0%, 4=92.0%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.362 issued rwts: total=1712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.362 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.362 filename2: (groupid=0, jobs=1): err= 0: pid=109232: Wed Nov 20 09:21:37 2024 00:28:00.362 read: IOPS=209, BW=837KiB/s (857kB/s)(8396KiB/10036msec) 00:28:00.362 slat (usec): min=4, max=8050, avg=22.77, stdev=303.06 00:28:00.362 clat (msec): min=11, max=179, avg=76.25, stdev=27.93 00:28:00.362 lat (msec): min=11, max=179, avg=76.27, stdev=27.94 00:28:00.362 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 52], 00:28:00.363 | 30.00th=[ 60], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:28:00.363 | 70.00th=[ 88], 80.00th=[ 108], 90.00th=[ 109], 95.00th=[ 122], 00:28:00.363 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 180], 99.95th=[ 180], 00:28:00.363 | 99.99th=[ 180] 
00:28:00.363 bw ( KiB/s): min= 560, max= 1280, per=4.49%, avg=833.20, stdev=206.43, samples=20 00:28:00.363 iops : min= 140, max= 320, avg=208.30, stdev=51.61, samples=20 00:28:00.363 lat (msec) : 20=1.52%, 50=17.39%, 100=57.79%, 250=23.30% 00:28:00.363 cpu : usr=36.37%, sys=1.00%, ctx=982, majf=0, minf=9 00:28:00.363 IO depths : 1=1.6%, 2=4.1%, 4=13.3%, 8=69.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 filename2: (groupid=0, jobs=1): err= 0: pid=109233: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=202, BW=812KiB/s (831kB/s)(8148KiB/10036msec) 00:28:00.363 slat (nsec): min=3832, max=38460, avg=10810.74, stdev=4176.79 00:28:00.363 clat (msec): min=22, max=197, avg=78.73, stdev=25.96 00:28:00.363 lat (msec): min=22, max=197, avg=78.74, stdev=25.96 00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 24], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 56], 00:28:00.363 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:28:00.363 | 70.00th=[ 89], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 127], 00:28:00.363 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 199], 99.95th=[ 199], 00:28:00.363 | 99.99th=[ 199] 00:28:00.363 bw ( KiB/s): min= 472, max= 1120, per=4.35%, avg=808.15, stdev=164.00, samples=20 00:28:00.363 iops : min= 118, max= 280, avg=202.00, stdev=40.95, samples=20 00:28:00.363 lat (msec) : 50=12.96%, 100=64.46%, 250=22.58% 00:28:00.363 cpu : usr=35.47%, sys=0.94%, ctx=1205, majf=0, minf=9 00:28:00.363 IO depths : 1=1.3%, 2=2.8%, 4=10.2%, 8=73.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:28:00.363 issued rwts: total=2037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 filename2: (groupid=0, jobs=1): err= 0: pid=109234: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=228, BW=915KiB/s (937kB/s)(9176KiB/10029msec) 00:28:00.363 slat (usec): min=4, max=8027, avg=17.92, stdev=205.17 00:28:00.363 clat (msec): min=3, max=148, avg=69.77, stdev=28.01 00:28:00.363 lat (msec): min=3, max=148, avg=69.79, stdev=28.01 00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 4], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 48], 00:28:00.363 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 73], 00:28:00.363 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 114], 00:28:00.363 | 99.00th=[ 138], 99.50th=[ 148], 99.90th=[ 148], 99.95th=[ 148], 00:28:00.363 | 99.99th=[ 148] 00:28:00.363 bw ( KiB/s): min= 560, max= 1781, per=4.92%, avg=913.05, stdev=278.23, samples=20 00:28:00.363 iops : min= 140, max= 445, avg=228.25, stdev=69.52, samples=20 00:28:00.363 lat (msec) : 4=2.09%, 10=2.09%, 20=0.70%, 50=22.10%, 100=57.50% 00:28:00.363 lat (msec) : 250=15.52% 00:28:00.363 cpu : usr=42.07%, sys=1.05%, ctx=1145, majf=0, minf=0 00:28:00.363 IO depths : 1=1.3%, 2=3.2%, 4=10.8%, 8=72.6%, 16=12.1%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 issued rwts: total=2294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 filename2: (groupid=0, jobs=1): err= 0: pid=109235: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=183, BW=735KiB/s (752kB/s)(7348KiB/10004msec) 00:28:00.363 slat (usec): min=4, max=8036, avg=19.86, stdev=264.53 00:28:00.363 clat (msec): min=9, max=185, avg=87.05, stdev=28.54 00:28:00.363 lat (msec): min=9, max=185, avg=87.07, stdev=28.54 
00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 64], 00:28:00.363 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:28:00.363 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 133], 00:28:00.363 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 186], 00:28:00.363 | 99.99th=[ 186] 00:28:00.363 bw ( KiB/s): min= 512, max= 992, per=3.83%, avg=710.32, stdev=148.49, samples=19 00:28:00.363 iops : min= 128, max= 248, avg=177.58, stdev=37.12, samples=19 00:28:00.363 lat (msec) : 10=0.33%, 20=1.74%, 50=7.84%, 100=60.86%, 250=29.23% 00:28:00.363 cpu : usr=32.29%, sys=0.97%, ctx=903, majf=0, minf=9 00:28:00.363 IO depths : 1=1.1%, 2=2.3%, 4=9.3%, 8=74.5%, 16=12.8%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=89.8%, 8=6.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 filename2: (groupid=0, jobs=1): err= 0: pid=109236: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=220, BW=880KiB/s (901kB/s)(8836KiB/10040msec) 00:28:00.363 slat (usec): min=5, max=4021, avg=15.91, stdev=138.64 00:28:00.363 clat (msec): min=12, max=189, avg=72.55, stdev=26.10 00:28:00.363 lat (msec): min=12, max=189, avg=72.57, stdev=26.11 00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 16], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 51], 00:28:00.363 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 77], 00:28:00.363 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 117], 00:28:00.363 | 99.00th=[ 155], 99.50th=[ 184], 99.90th=[ 190], 99.95th=[ 190], 00:28:00.363 | 99.99th=[ 190] 00:28:00.363 bw ( KiB/s): min= 432, max= 1248, per=4.72%, avg=876.85, stdev=191.46, samples=20 00:28:00.363 iops : min= 108, max= 312, avg=219.20, stdev=47.87, samples=20 
00:28:00.363 lat (msec) : 20=1.45%, 50=17.16%, 100=68.45%, 250=12.95% 00:28:00.363 cpu : usr=40.78%, sys=1.01%, ctx=1442, majf=0, minf=9 00:28:00.363 IO depths : 1=1.0%, 2=2.3%, 4=9.1%, 8=74.9%, 16=12.7%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 filename2: (groupid=0, jobs=1): err= 0: pid=109237: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=190, BW=762KiB/s (780kB/s)(7652KiB/10040msec) 00:28:00.363 slat (nsec): min=6577, max=38991, avg=11113.78, stdev=3771.73 00:28:00.363 clat (msec): min=29, max=179, avg=83.86, stdev=26.10 00:28:00.363 lat (msec): min=29, max=179, avg=83.87, stdev=26.10 00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 64], 00:28:00.363 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 85], 00:28:00.363 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 111], 95.00th=[ 144], 00:28:00.363 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:28:00.363 | 99.99th=[ 180] 00:28:00.363 bw ( KiB/s): min= 512, max= 992, per=4.09%, avg=758.80, stdev=122.08, samples=20 00:28:00.363 iops : min= 128, max= 248, avg=189.70, stdev=30.52, samples=20 00:28:00.363 lat (msec) : 50=8.68%, 100=68.32%, 250=23.00% 00:28:00.363 cpu : usr=32.51%, sys=0.73%, ctx=937, majf=0, minf=9 00:28:00.363 IO depths : 1=1.3%, 2=2.8%, 4=9.7%, 8=73.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:28:00.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.363 issued rwts: total=1913,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.363 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.363 
filename2: (groupid=0, jobs=1): err= 0: pid=109238: Wed Nov 20 09:21:37 2024 00:28:00.363 read: IOPS=172, BW=691KiB/s (708kB/s)(6912KiB/10001msec) 00:28:00.363 slat (usec): min=4, max=8020, avg=16.06, stdev=192.73 00:28:00.363 clat (msec): min=2, max=183, avg=92.49, stdev=32.18 00:28:00.363 lat (msec): min=2, max=183, avg=92.50, stdev=32.18 00:28:00.363 clat percentiles (msec): 00:28:00.363 | 1.00th=[ 6], 5.00th=[ 45], 10.00th=[ 57], 20.00th=[ 70], 00:28:00.363 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 96], 60.00th=[ 105], 00:28:00.363 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 131], 95.00th=[ 153], 00:28:00.363 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 184], 99.95th=[ 184], 00:28:00.363 | 99.99th=[ 184] 00:28:00.364 bw ( KiB/s): min= 512, max= 976, per=3.52%, avg=653.37, stdev=132.93, samples=19 00:28:00.364 iops : min= 128, max= 244, avg=163.32, stdev=33.22, samples=19 00:28:00.364 lat (msec) : 4=0.93%, 10=1.85%, 20=0.93%, 50=3.99%, 100=45.25% 00:28:00.364 lat (msec) : 250=47.05% 00:28:00.364 cpu : usr=38.35%, sys=1.06%, ctx=1149, majf=0, minf=9 00:28:00.364 IO depths : 1=3.7%, 2=8.2%, 4=20.0%, 8=59.3%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:00.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.364 complete : 0=0.0%, 4=92.7%, 8=1.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.364 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.364 filename2: (groupid=0, jobs=1): err= 0: pid=109239: Wed Nov 20 09:21:37 2024 00:28:00.364 read: IOPS=181, BW=724KiB/s (741kB/s)(7252KiB/10016msec) 00:28:00.364 slat (usec): min=6, max=4019, avg=15.90, stdev=133.17 00:28:00.364 clat (msec): min=21, max=180, avg=88.32, stdev=29.87 00:28:00.364 lat (msec): min=21, max=180, avg=88.33, stdev=29.87 00:28:00.364 clat percentiles (msec): 00:28:00.364 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 57], 00:28:00.364 | 30.00th=[ 68], 40.00th=[ 74], 50.00th=[ 88], 
60.00th=[ 101], 00:28:00.364 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 128], 95.00th=[ 138], 00:28:00.364 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 182], 00:28:00.364 | 99.99th=[ 182] 00:28:00.364 bw ( KiB/s): min= 512, max= 1128, per=3.83%, avg=711.26, stdev=195.66, samples=19 00:28:00.364 iops : min= 128, max= 282, avg=177.74, stdev=48.88, samples=19 00:28:00.364 lat (msec) : 50=9.82%, 100=49.70%, 250=40.49% 00:28:00.364 cpu : usr=38.49%, sys=0.93%, ctx=1310, majf=0, minf=9 00:28:00.364 IO depths : 1=0.4%, 2=1.2%, 4=6.5%, 8=77.2%, 16=14.7%, 32=0.0%, >=64=0.0% 00:28:00.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.364 complete : 0=0.0%, 4=89.9%, 8=6.9%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:00.364 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:00.364 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:00.364 00:28:00.364 Run status group 0 (all jobs): 00:28:00.364 READ: bw=18.1MiB/s (19.0MB/s), 683KiB/s-915KiB/s (699kB/s-937kB/s), io=182MiB (191MB), run=10001-10054msec 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 
09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:28:00.364 bdev_null0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.364 [2024-11-20 09:21:37.851952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:00.364 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.365 bdev_null1 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 
00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:00.365 { 00:28:00.365 "params": { 00:28:00.365 "name": "Nvme$subsystem", 00:28:00.365 "trtype": "$TEST_TRANSPORT", 00:28:00.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.365 "adrfam": "ipv4", 00:28:00.365 "trsvcid": "$NVMF_PORT", 00:28:00.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.365 "hdgst": ${hdgst:-false}, 00:28:00.365 "ddgst": ${ddgst:-false} 00:28:00.365 }, 00:28:00.365 "method": "bdev_nvme_attach_controller" 00:28:00.365 } 00:28:00.365 EOF 00:28:00.365 )") 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:00.365 { 00:28:00.365 "params": { 00:28:00.365 "name": "Nvme$subsystem", 00:28:00.365 "trtype": "$TEST_TRANSPORT", 00:28:00.365 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.365 "adrfam": "ipv4", 00:28:00.365 "trsvcid": "$NVMF_PORT", 00:28:00.365 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.365 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.365 "hdgst": 
${hdgst:-false}, 00:28:00.365 "ddgst": ${ddgst:-false} 00:28:00.365 }, 00:28:00.365 "method": "bdev_nvme_attach_controller" 00:28:00.365 } 00:28:00.365 EOF 00:28:00.365 )") 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:00.365 "params": { 00:28:00.365 "name": "Nvme0", 00:28:00.365 "trtype": "tcp", 00:28:00.365 "traddr": "10.0.0.2", 00:28:00.365 "adrfam": "ipv4", 00:28:00.365 "trsvcid": "4420", 00:28:00.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.365 "hdgst": false, 00:28:00.365 "ddgst": false 00:28:00.365 }, 00:28:00.365 "method": "bdev_nvme_attach_controller" 00:28:00.365 },{ 00:28:00.365 "params": { 00:28:00.365 "name": "Nvme1", 00:28:00.365 "trtype": "tcp", 00:28:00.365 "traddr": "10.0.0.2", 00:28:00.365 "adrfam": "ipv4", 00:28:00.365 "trsvcid": "4420", 00:28:00.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.365 "hdgst": false, 00:28:00.365 "ddgst": false 00:28:00.365 }, 00:28:00.365 "method": "bdev_nvme_attach_controller" 00:28:00.365 }' 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:00.365 09:21:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:00.365 09:21:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.365 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:00.365 ... 00:28:00.365 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:00.365 ... 00:28:00.365 fio-3.35 00:28:00.365 Starting 4 threads 00:28:05.632 00:28:05.632 filename0: (groupid=0, jobs=1): err= 0: pid=109360: Wed Nov 20 09:21:43 2024 00:28:05.632 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:28:05.632 slat (nsec): min=4064, max=56810, avg=8646.03, stdev=2226.19 00:28:05.632 clat (usec): min=2971, max=7145, avg=4011.08, stdev=149.45 00:28:05.632 lat (usec): min=2983, max=7153, avg=4019.72, stdev=149.55 00:28:05.632 clat percentiles (usec): 00:28:05.632 | 1.00th=[ 3916], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:28:05.632 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:28:05.632 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:28:05.632 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 5669], 99.95th=[ 6063], 00:28:05.632 | 99.99th=[ 7177] 00:28:05.632 bw ( KiB/s): min=15232, max=15872, per=25.00%, avg=15775.89, stdev=209.84, samples=9 00:28:05.632 iops : min= 1904, max= 1984, avg=1971.89, stdev=26.23, samples=9 00:28:05.632 lat (msec) : 4=55.63%, 10=44.37% 00:28:05.632 cpu : 
usr=94.50%, sys=4.36%, ctx=27, majf=0, minf=0 00:28:05.632 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.632 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.632 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.632 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.632 filename0: (groupid=0, jobs=1): err= 0: pid=109361: Wed Nov 20 09:21:43 2024 00:28:05.632 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:28:05.632 slat (nsec): min=4785, max=38273, avg=12040.98, stdev=3923.58 00:28:05.632 clat (usec): min=2934, max=7162, avg=3992.07, stdev=146.92 00:28:05.632 lat (usec): min=2941, max=7170, avg=4004.11, stdev=147.13 00:28:05.632 clat percentiles (usec): 00:28:05.632 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3916], 20.00th=[ 3949], 00:28:05.632 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 3982], 00:28:05.632 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4047], 00:28:05.632 | 99.00th=[ 4555], 99.50th=[ 5080], 99.90th=[ 6063], 99.95th=[ 6063], 00:28:05.632 | 99.99th=[ 7177] 00:28:05.632 bw ( KiB/s): min=15360, max=15872, per=24.99%, avg=15772.44, stdev=166.62, samples=9 00:28:05.632 iops : min= 1920, max= 1984, avg=1971.56, stdev=20.83, samples=9 00:28:05.632 lat (msec) : 4=71.48%, 10=28.52% 00:28:05.632 cpu : usr=94.08%, sys=4.78%, ctx=4, majf=0, minf=1 00:28:05.632 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.632 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.632 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.632 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.632 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.633 filename1: (groupid=0, jobs=1): err= 0: pid=109362: Wed Nov 20 
09:21:43 2024 00:28:05.633 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:28:05.633 slat (nsec): min=3898, max=28535, avg=8510.31, stdev=1922.13 00:28:05.633 clat (usec): min=2005, max=7835, avg=4011.93, stdev=174.57 00:28:05.633 lat (usec): min=2012, max=7842, avg=4020.44, stdev=174.62 00:28:05.633 clat percentiles (usec): 00:28:05.633 | 1.00th=[ 3949], 5.00th=[ 3949], 10.00th=[ 3949], 20.00th=[ 3982], 00:28:05.633 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 4015], 00:28:05.633 | 70.00th=[ 4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:28:05.633 | 99.00th=[ 4359], 99.50th=[ 5211], 99.90th=[ 6390], 99.95th=[ 6980], 00:28:05.633 | 99.99th=[ 7832] 00:28:05.633 bw ( KiB/s): min=15232, max=15872, per=24.99%, avg=15772.44, stdev=210.11, samples=9 00:28:05.633 iops : min= 1904, max= 1984, avg=1971.56, stdev=26.26, samples=9 00:28:05.633 lat (msec) : 4=54.82%, 10=45.18% 00:28:05.633 cpu : usr=94.40%, sys=4.52%, ctx=7, majf=0, minf=0 00:28:05.633 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.633 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.633 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.633 filename1: (groupid=0, jobs=1): err= 0: pid=109363: Wed Nov 20 09:21:43 2024 00:28:05.633 read: IOPS=1972, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5001msec) 00:28:05.633 slat (nsec): min=7732, max=50294, avg=12301.15, stdev=4088.03 00:28:05.633 clat (usec): min=3016, max=7174, avg=3997.86, stdev=139.39 00:28:05.633 lat (usec): min=3029, max=7183, avg=4010.16, stdev=138.76 00:28:05.633 clat percentiles (usec): 00:28:05.633 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3916], 20.00th=[ 3949], 00:28:05.633 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 3982], 60.00th=[ 3982], 00:28:05.633 | 70.00th=[ 
4015], 80.00th=[ 4015], 90.00th=[ 4047], 95.00th=[ 4080], 00:28:05.633 | 99.00th=[ 4555], 99.50th=[ 5014], 99.90th=[ 5735], 99.95th=[ 5735], 00:28:05.633 | 99.99th=[ 7177] 00:28:05.633 bw ( KiB/s): min=15360, max=15872, per=25.02%, avg=15786.67, stdev=169.33, samples=9 00:28:05.633 iops : min= 1920, max= 1984, avg=1973.33, stdev=21.17, samples=9 00:28:05.633 lat (msec) : 4=62.94%, 10=37.06% 00:28:05.633 cpu : usr=94.12%, sys=4.64%, ctx=9, majf=0, minf=0 00:28:05.633 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.633 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.633 issued rwts: total=9864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.633 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:05.633 00:28:05.633 Run status group 0 (all jobs): 00:28:05.633 READ: bw=61.6MiB/s (64.6MB/s), 15.4MiB/s-15.4MiB/s (16.2MB/s-16.2MB/s), io=308MiB (323MB), run=5001-5002msec 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 ************************************ 00:28:05.633 END TEST fio_dif_rand_params 00:28:05.633 ************************************ 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 00:28:05.633 real 0m23.997s 00:28:05.633 user 2m7.041s 00:28:05.633 sys 0m5.238s 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif -- target/dif.sh@144 -- # run_test 
fio_dif_digest fio_dif_digest 00:28:05.633 09:21:44 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.633 09:21:44 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 ************************************ 00:28:05.633 START TEST fio_dif_digest 00:28:05.633 ************************************ 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:05.633 09:21:44 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 bdev_null0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.633 [2024-11-20 09:21:44.189976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:05.633 { 00:28:05.633 "params": { 00:28:05.633 "name": "Nvme$subsystem", 00:28:05.633 "trtype": "$TEST_TRANSPORT", 00:28:05.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.633 "adrfam": "ipv4", 00:28:05.633 "trsvcid": "$NVMF_PORT", 00:28:05.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.633 "hdgst": ${hdgst:-false}, 00:28:05.633 "ddgst": ${ddgst:-false} 00:28:05.633 }, 00:28:05.633 "method": "bdev_nvme_attach_controller" 00:28:05.633 } 00:28:05.633 EOF 00:28:05.633 )") 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.633 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:05.634 
09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:05.634 "params": { 00:28:05.634 "name": "Nvme0", 00:28:05.634 "trtype": "tcp", 00:28:05.634 "traddr": "10.0.0.2", 00:28:05.634 "adrfam": "ipv4", 00:28:05.634 "trsvcid": "4420", 00:28:05.634 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.634 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.634 "hdgst": true, 00:28:05.634 "ddgst": true 00:28:05.634 }, 00:28:05.634 "method": "bdev_nvme_attach_controller" 00:28:05.634 }' 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:05.634 09:21:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.634 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.634 ... 
00:28:05.634 fio-3.35 00:28:05.634 Starting 3 threads 00:28:17.836 00:28:17.836 filename0: (groupid=0, jobs=1): err= 0: pid=109468: Wed Nov 20 09:21:55 2024 00:28:17.836 read: IOPS=253, BW=31.6MiB/s (33.2MB/s)(317MiB/10007msec) 00:28:17.836 slat (nsec): min=8038, max=53545, avg=13118.38, stdev=2199.97 00:28:17.836 clat (usec): min=8920, max=54023, avg=11840.15, stdev=2929.30 00:28:17.836 lat (usec): min=8933, max=54034, avg=11853.27, stdev=2929.29 00:28:17.836 clat percentiles (usec): 00:28:17.836 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:28:17.836 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:28:17.836 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:28:17.836 | 99.00th=[13829], 99.50th=[14746], 99.90th=[53216], 99.95th=[53216], 00:28:17.836 | 99.99th=[54264] 00:28:17.836 bw ( KiB/s): min=29696, max=33792, per=38.82%, avg=32336.84, stdev=1145.03, samples=19 00:28:17.836 iops : min= 232, max= 264, avg=252.63, stdev= 8.95, samples=19 00:28:17.836 lat (msec) : 10=3.63%, 20=95.89%, 100=0.47% 00:28:17.836 cpu : usr=92.85%, sys=5.63%, ctx=23, majf=0, minf=9 00:28:17.836 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.836 issued rwts: total=2532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.836 filename0: (groupid=0, jobs=1): err= 0: pid=109469: Wed Nov 20 09:21:55 2024 00:28:17.836 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(268MiB/10004msec) 00:28:17.836 slat (nsec): min=7900, max=53377, avg=13241.06, stdev=3259.16 00:28:17.836 clat (usec): min=6591, max=17830, avg=13999.65, stdev=1423.00 00:28:17.836 lat (usec): min=6599, max=17884, avg=14012.89, stdev=1422.88 00:28:17.836 clat percentiles (usec): 00:28:17.836 | 1.00th=[ 
8455], 5.00th=[11863], 10.00th=[12518], 20.00th=[13042], 00:28:17.836 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14353], 00:28:17.836 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:28:17.836 | 99.00th=[16450], 99.50th=[16712], 99.90th=[16909], 99.95th=[17171], 00:28:17.836 | 99.99th=[17957] 00:28:17.836 bw ( KiB/s): min=26112, max=28928, per=32.87%, avg=27378.53, stdev=741.34, samples=19 00:28:17.836 iops : min= 204, max= 226, avg=213.89, stdev= 5.79, samples=19 00:28:17.836 lat (msec) : 10=2.43%, 20=97.57% 00:28:17.836 cpu : usr=92.98%, sys=5.69%, ctx=17, majf=0, minf=0 00:28:17.836 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.836 issued rwts: total=2141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.836 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.836 filename0: (groupid=0, jobs=1): err= 0: pid=109470: Wed Nov 20 09:21:55 2024 00:28:17.836 read: IOPS=183, BW=23.0MiB/s (24.1MB/s)(230MiB/10003msec) 00:28:17.836 slat (nsec): min=7835, max=43165, avg=12066.20, stdev=4549.50 00:28:17.836 clat (usec): min=8667, max=19269, avg=16290.24, stdev=1223.23 00:28:17.836 lat (usec): min=8675, max=19277, avg=16302.31, stdev=1223.49 00:28:17.836 clat percentiles (usec): 00:28:17.836 | 1.00th=[10028], 5.00th=[15139], 10.00th=[15401], 20.00th=[15795], 00:28:17.836 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16450], 60.00th=[16581], 00:28:17.836 | 70.00th=[16909], 80.00th=[16909], 90.00th=[17433], 95.00th=[17695], 00:28:17.836 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[19268], 00:28:17.836 | 99.99th=[19268] 00:28:17.836 bw ( KiB/s): min=22784, max=24576, per=28.29%, avg=23565.47, stdev=695.73, samples=19 00:28:17.836 iops : min= 178, max= 192, avg=184.11, stdev= 5.44, samples=19 00:28:17.836 
lat (msec) : 10=0.98%, 20=99.02% 00:28:17.836 cpu : usr=92.97%, sys=5.77%, ctx=9, majf=0, minf=0 00:28:17.837 IO depths : 1=25.1%, 2=74.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:17.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:17.837 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:17.837 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:17.837 00:28:17.837 Run status group 0 (all jobs): 00:28:17.837 READ: bw=81.3MiB/s (85.3MB/s), 23.0MiB/s-31.6MiB/s (24.1MB/s-33.2MB/s), io=814MiB (854MB), run=10003-10007msec 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 ************************************ 00:28:17.837 END TEST fio_dif_digest 00:28:17.837 ************************************ 00:28:17.837 09:21:55 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.837 00:28:17.837 real 0m11.068s 00:28:17.837 user 0m28.581s 00:28:17.837 sys 0m2.001s 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.837 09:21:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 09:21:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:17.837 09:21:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:17.837 rmmod nvme_tcp 00:28:17.837 rmmod nvme_fabrics 00:28:17.837 rmmod nvme_keyring 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 108727 ']' 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 108727 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 108727 ']' 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 108727 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108727 00:28:17.837 killing process with pid 108727 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.837 
09:21:55 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108727' 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@973 -- # kill 108727 00:28:17.837 09:21:55 nvmf_dif -- common/autotest_common.sh@978 -- # wait 108727 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:28:17.837 09:21:55 nvmf_dif -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:17.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:17.837 Waiting for block devices as requested 00:28:17.837 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.837 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:17.837 09:21:56 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:17.837 09:21:56 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:28:17.837 09:21:56 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator0/address ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@261 -- # continue 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:28:17.837 09:21:56 nvmf_dif -- nvmf/setup.sh@274 -- # 
iptr 00:28:17.837 09:21:56 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:28:17.837 09:21:56 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:28:17.837 09:21:56 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:17.837 00:28:17.837 real 1m0.444s 00:28:17.837 user 3m53.227s 00:28:17.837 sys 0m14.758s 00:28:17.837 09:21:56 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:17.837 09:21:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 ************************************ 00:28:17.837 END TEST nvmf_dif 00:28:17.837 ************************************ 00:28:17.837 09:21:56 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.837 09:21:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:17.837 09:21:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:17.837 09:21:56 -- common/autotest_common.sh@10 -- # set +x 00:28:17.837 ************************************ 00:28:17.837 START TEST nvmf_abort_qd_sizes 00:28:17.837 ************************************ 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:17.837 * Looking for test storage... 
00:28:17.837 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.837 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.838 --rc genhtml_branch_coverage=1 00:28:17.838 --rc genhtml_function_coverage=1 00:28:17.838 --rc genhtml_legend=1 00:28:17.838 --rc geninfo_all_blocks=1 00:28:17.838 --rc geninfo_unexecuted_blocks=1 00:28:17.838 00:28:17.838 ' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.838 --rc genhtml_branch_coverage=1 00:28:17.838 --rc genhtml_function_coverage=1 00:28:17.838 --rc genhtml_legend=1 00:28:17.838 --rc 
geninfo_all_blocks=1 00:28:17.838 --rc geninfo_unexecuted_blocks=1 00:28:17.838 00:28:17.838 ' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.838 --rc genhtml_branch_coverage=1 00:28:17.838 --rc genhtml_function_coverage=1 00:28:17.838 --rc genhtml_legend=1 00:28:17.838 --rc geninfo_all_blocks=1 00:28:17.838 --rc geninfo_unexecuted_blocks=1 00:28:17.838 00:28:17.838 ' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.838 --rc genhtml_branch_coverage=1 00:28:17.838 --rc genhtml_function_coverage=1 00:28:17.838 --rc genhtml_legend=1 00:28:17.838 --rc geninfo_all_blocks=1 00:28:17.838 --rc geninfo_unexecuted_blocks=1 00:28:17.838 00:28:17.838 ' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 
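The trace above shows `scripts/common.sh` gating lcov options on `lt 1.15 2`: both version strings are split on `IFS=.-:` into arrays and compared field by field. A condensed standalone sketch of that idea (hypothetical `lt` helper, not the actual `cmp_versions` code; assumes bash):

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field "less than" version check seen in the
# trace: split on . - : and compare each numeric component in turn.
lt() {
  local IFS=.-: v=0
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  while (( v < ${#ver1[@]} || v < ${#ver2[@]} )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger
    (( v++ ))
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov < 2: use --rc lcov_* option names"
```

This is why the run above selects the `--rc lcov_branch_coverage=1` spelling: lcov 1.15 sorts before 2.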
00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.838 09:21:56 
nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.838 09:21:56 
nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:17.838 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ virt != virt ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ no == yes ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # [[ virt == phy ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ virt == phy-fallback ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # [[ tcp == tcp ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@280 -- # nvmf_veth_init 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local total_initiator_target_pairs=2 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@223 -- # create_target_ns 
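Note the captured error just above: `/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected` comes from `'[' '' -eq 1 ']'`, i.e. an empty variable reaching a numeric `-eq` test. A minimal sketch of the defensive pattern that avoids it (variable name hypothetical, not the actual common.sh variable):

```shell
#!/usr/bin/env bash
# An empty string is not an integer, so `[ "" -eq 1 ]` errors out, which
# is exactly what the trace records at common.sh line 31.  Defaulting the
# value first keeps the numeric test well-formed.
flag=""                                   # empty, as in the trace

# ${flag:-0} substitutes 0 when flag is unset OR empty:
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

Because `[ ... ]` only fails the test (non-zero status) rather than aborting the script, the run above continues past the error, but the noise is avoidable with the `:-` default.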
00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # create_main_bridge 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@105 -- # delete_main_bridge 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # ip link add nvmf_br type bridge 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@108 -- # set_up nvmf_br 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=nvmf_br in_ns= 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set nvmf_br up' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set nvmf_br up 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@110 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@225 -- # setup_interfaces 2 veth 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=2 type=veth transport=tcp ip_pool=0x0a000001 max 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 veth 167772161 tcp 00:28:17.838 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=veth ip=167772161 transport=tcp ips 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth initiator0 initiator0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator0 
peer=initiator0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator0 type veth peer name initiator0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator0 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target0 target0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target0 peer=target0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target0 type veth peer name target0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target0 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0 up' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0 up 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up 
target0_br 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target0 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target0 ns=nvmf_ns_spdk 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target0 netns nvmf_ns_spdk 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator0 167772161 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator0 ip=167772161 in_ns= 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev initiator0' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev initiator0 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/initiator0/ifalias' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/initiator0/ifalias 00:28:17.839 10.0.0.1 00:28:17.839 09:21:56 
nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target0 167772162 NVMF_TARGET_NS_CMD 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target0 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0' 00:28:17.839 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev target0 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target0/ifalias 00:28:18.100 10.0.0.2 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator0 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0 in_ns= 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0 up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0 up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up target0 NVMF_TARGET_NS_CMD 00:28:18.100 09:21:56 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target0 up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target0 up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator0_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator0_br bridge=nvmf_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator0_br master nvmf_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator0_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator0_br in_ns= 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator0_br up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator0_br up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target0_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target0_br bridge=nvmf_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target0_br master nvmf_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target0_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target0_br in_ns= 00:28:18.100 09:21:56 
nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target0_br up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target0_br up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator0 -p tcp --dport 4420 -j ACCEPT' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator0 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target0 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 1 veth 167772163 tcp 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=1 type=veth ip=167772163 transport=tcp ips 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator1 target=target1 _ns= 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ veth == phy ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ veth == veth ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # create_veth 
initiator1 initiator1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=initiator1 peer=initiator1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add initiator1 type veth peer name initiator1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up initiator1 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up initiator1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ veth == veth ]] 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # create_veth target1 target1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # local dev=target1 peer=target1_br 00:28:18.100 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@149 -- # ip link add target1 type veth peer name target1_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@151 -- # set_up target1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1 up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@207 -- # ip link set target1 up 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # set_up target1_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns target1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=target1 ns=nvmf_ns_spdk 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set target1 netns nvmf_ns_spdk 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip initiator1 167772163 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=initiator1 ip=167772163 in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772163 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772163 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 3 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.3 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.3/24 dev initiator1' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.3/24 dev initiator1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.3 | tee /sys/class/net/initiator1/ifalias' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.3 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@200 -- # tee /sys/class/net/initiator1/ifalias 00:28:18.101 10.0.0.3 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip target1 167772164 NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=target1 ip=167772164 in_ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772164 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772164 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 4 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.4 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.4/24 dev target1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.4 | ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.4 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/target1/ifalias 00:28:18.101 10.0.0.4 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up initiator1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1 in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1 up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1 up 00:28:18.101 
09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up target1 NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set target1 up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set target1 up 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ veth == veth ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # add_to_bridge initiator1_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=initiator1_br bridge=nvmf_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set initiator1_br master nvmf_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up initiator1_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=initiator1_br in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set initiator1_br up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set initiator1_br up 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ veth == veth ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_bridge target1_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@126 -- # local dev=target1_br bridge=nvmf_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@127 -- # ip link set target1_br master nvmf_br 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@129 -- # set_up target1_br 00:28:18.101 09:21:56 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=target1_br in_ns= 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set target1_br up' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set target1_br up 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i initiator1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=initiator1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=target1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 2 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=2 pair 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.101 09:21:56 
nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:18.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
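Throughout the setup above, `val_to_ip` (setup.sh@11-13) turns addresses drawn from the `ip_pool` counter, e.g. `167772161`, into dotted-quad strings via `printf '%u.%u.%u.%u'`. A self-contained sketch of that conversion (shift-and-mask version; the real helper may derive the four octets differently):

```shell
#!/usr/bin/env bash
# Unpack a 32-bit integer into dotted-quad IPv4 notation.
# 167772161 == 0x0A000001 == 10.0.0.1, the initiator0 address in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( val >> 24 & 255 )) \
    $(( val >> 16 & 255 )) \
    $(( val >>  8 & 255 )) \
    $(( val       & 255 ))
}

val_to_ip 167772161   # initiator0
val_to_ip 167772162   # target0
val_to_ip 167772164   # target1
```

Incrementing the pool by 2 per interface pair, as `(( _dev++, ip_pool += 2 ))` does in the trace, is what yields the 10.0.0.1/10.0.0.2 and 10.0.0.3/10.0.0.4 initiator/target pairs.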
00:28:18.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:28:18.101 00:28:18.101 --- 10.0.0.1 ping statistics --- 00:28:18.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.101 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:28:18.101 09:21:56 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= 
count=1 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:18.101 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:18.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:28:18.102 00:28:18.102 --- 10.0.0.2 ping statistics --- 00:28:18.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.102 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator1 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:28:18.102 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.3 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.3 NVMF_TARGET_NS_CMD 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.3 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3' 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.3 00:28:18.364 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:18.364 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:28:18.364 00:28:18.364 --- 10.0.0.3 ping statistics --- 00:28:18.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.364 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.364 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:18.365 09:21:57 
nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.4 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.4 in_ns= count=1 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.4' 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.4 00:28:18.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:28:18.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.099 ms 00:28:18.365 00:28:18.365 --- 10.0.0.4 ping statistics --- 00:28:18.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.365 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # return 0 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:28:18.365 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:18.959 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.959 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:18.959 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=target0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator0/ifalias' 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=initiator1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/initiator1/ifalias' 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator1/ifalias 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@163 -- # ip=10.0.0.3 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.3 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.3 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.3 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target0 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias' 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target0/ifalias 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=target1 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias' 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/target1/ifalias 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.4 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.4 ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.4 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
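The repeated `get_ip_address`/`ifalias` lookups traced above all follow one convention: setup.sh stores each veth's IP address in the interface's `ifalias`, so resolving a device name to an address is just a sysfs read (wrapped in `ip netns exec nvmf_ns_spdk` for target-side devices). A minimal sketch of that pattern — `SYSFS_NET` is a parameter added here so the sketch can run against a throwaway directory instead of real interfaces:

```shell
# Sketch of the ifalias convention used by nvmf/setup.sh (assumption: the
# setup scripts have already written the IP into each device's ifalias).
# SYSFS_NET defaults to the real sysfs path but is overridable for testing.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

get_ip() {
  local dev=$1
  cat "$SYSFS_NET/$dev/ifalias"
}

# Exercise it against a fake sysfs tree so no real veth devices are needed:
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/initiator0"
echo 10.0.0.1 > "$SYSFS_NET/initiator0/ifalias"
get_ip initiator0   # prints 10.0.0.1
```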
00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:19.218 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.219 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:19.219 09:21:57 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=110109 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 110109 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 110109 ']' 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.219 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 [2024-11-20 09:21:58.075033] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:28:19.219 [2024-11-20 09:21:58.075151] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.477 [2024-11-20 09:21:58.233345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:19.477 [2024-11-20 09:21:58.316629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.477 [2024-11-20 09:21:58.317021] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.477 [2024-11-20 09:21:58.317201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.477 [2024-11-20 09:21:58.317519] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.477 [2024-11-20 09:21:58.317676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:19.477 [2024-11-20 09:21:58.319213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.477 [2024-11-20 09:21:58.319297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.477 [2024-11-20 09:21:58.319405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.477 [2024-11-20 09:21:58.319409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:19.735 09:21:58 nvmf_abort_qd_sizes 
-- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # 
echo 0000:00:10.0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:19.735 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 
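The enumeration above (`iter_pci_class_code 01 08 02`) selects NVMe controllers by PCI class 01, subclass 08, prog-if 02 from `lspci -mm -n -D` output. A standalone sketch of that same filter, with a sample of `lspci` output inlined (the sample rows are illustrative, modeled on the QEMU `1b36:0010` devices seen in this run) so it can be exercised without hardware:

```shell
# Sketch of the NVMe BDF filter traced above: grep for prog-if "-p02",
# match the quoted class code field "0108", print the BDF. The sample
# lspci -mm -n -D output below is an assumption for illustration.
sample='0000:00:03.0 "0100" "1af4" "1001" -r00 "" ""
0000:00:10.0 "0108" "1b36" "0010" -p02 "" ""
0000:00:11.0 "0108" "1b36" "0010" -p02 "" ""'

printf '%s\n' "$sample" \
  | grep -i -- -p02 \
  | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
  | tr -d '"'
# prints:
# 0000:00:10.0
# 0000:00:11.0
```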
00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:19.736 09:21:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:19.736 ************************************ 00:28:19.736 START TEST spdk_target_abort 00:28:19.736 ************************************ 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.736 spdk_targetn1 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.736 [2024-11-20 09:21:58.629464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.736 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:19.994 [2024-11-20 09:21:58.661675] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:19.994 09:21:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.275 Initializing NVMe Controllers 00:28:23.275 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:23.275 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:23.275 Initialization complete. Launching workers. 00:28:23.275 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11552, failed: 0 00:28:23.275 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1039, failed to submit 10513 00:28:23.275 success 719, unsuccessful 320, failed 0 00:28:23.275 09:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.275 09:22:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.607 Initializing NVMe Controllers 00:28:26.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:26.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:26.607 Initialization complete. Launching workers. 
00:28:26.607 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6017, failed: 0 00:28:26.607 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1273, failed to submit 4744 00:28:26.607 success 210, unsuccessful 1063, failed 0 00:28:26.607 09:22:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:26.607 09:22:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.893 Initializing NVMe Controllers 00:28:29.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:29.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:29.893 Initialization complete. Launching workers. 00:28:29.893 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29697, failed: 0 00:28:29.893 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2642, failed to submit 27055 00:28:29.893 success 390, unsuccessful 2252, failed 0 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
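The three abort runs above (qd=4, qd=24, qd=64) report counters that satisfy two invariants of the abort example: I/O completed equals aborts submitted plus aborts that failed to submit, and aborts submitted equals success plus unsuccessful. A small check of the numbers as logged (the helper name `check_run` is ours, not part of the test suite):

```shell
# Cross-check the abort counters reported in the log:
#   completed == submitted + failed_to_submit
#   submitted == success + unsuccessful
check_run() {
  local completed=$1 submitted=$2 failed=$3 success=$4 unsuccessful=$5
  (( completed == submitted + failed )) || return 1
  (( submitted == success + unsuccessful )) || return 1
}

check_run 11552 1039 10513 719 320  && \
check_run 6017  1273 4744  210 1063 && \
check_run 29697 2642 27055 390 2252 && \
echo "abort counters consistent"
# prints: abort counters consistent
```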
00:28:29.893 09:22:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:30.918 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 110109 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 110109 ']' 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 110109 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110109 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:30.919 killing process with pid 110109 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110109' 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 110109 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 110109 00:28:30.919 ************************************ 00:28:30.919 END TEST spdk_target_abort 00:28:30.919 ************************************ 00:28:30.919 00:28:30.919 real 0m11.258s 00:28:30.919 user 0m43.751s 00:28:30.919 sys 0m1.680s 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:30.919 09:22:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:28:31.177 09:22:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:31.177 09:22:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:31.177 09:22:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:31.177 09:22:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.177 ************************************ 00:28:31.177 START TEST kernel_target_abort 00:28:31.177 ************************************ 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo initiator0 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=initiator0 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/initiator0/ifalias' 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/initiator0/ifalias 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:31.178 09:22:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:31.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.437 Waiting for block devices as requested 00:28:31.437 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:31.696 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:31.696 No valid GPT data, bailing 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:31.696 09:22:10 
nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n2 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:31.696 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:31.697 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:31.697 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n2 00:28:31.697 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:31.697 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:31.956 No valid GPT data, bailing 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n2 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:31.956 
09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n3 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n3 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:31.956 No valid GPT data, bailing 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n3 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme1n1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme1n1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:31.956 No valid GPT data, bailing 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme1n1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme1n1 ]] 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme1n1 00:28:31.956 09:22:10 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 --hostid=44ab6922-625f-4dd5-abd7-64d78c556468 -a 10.0.0.1 -t tcp -s 4420 00:28:31.956 00:28:31.956 Discovery Log Number of Records 2, Generation counter 2 00:28:31.956 =====Discovery Log Entry 0====== 00:28:31.956 trtype: tcp 00:28:31.956 adrfam: ipv4 00:28:31.956 subtype: current discovery subsystem 00:28:31.956 treq: not specified, sq flow control disable supported 00:28:31.956 portid: 1 00:28:31.956 trsvcid: 4420 00:28:31.956 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:31.956 traddr: 10.0.0.1 00:28:31.956 eflags: none 00:28:31.956 sectype: none 00:28:31.956 =====Discovery Log Entry 1====== 00:28:31.956 trtype: tcp 00:28:31.956 adrfam: ipv4 00:28:31.956 subtype: nvme subsystem 00:28:31.956 treq: not specified, sq flow control disable supported 00:28:31.956 portid: 1 00:28:31.956 trsvcid: 4420 00:28:31.956 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:31.956 traddr: 10.0.0.1 00:28:31.956 eflags: none 00:28:31.956 sectype: none 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:31.956 09:22:10 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.956 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.957 09:22:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:35.244 Initializing NVMe Controllers 00:28:35.244 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.244 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.244 Initialization complete. Launching workers. 
00:28:35.244 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35291, failed: 0 00:28:35.244 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35291, failed to submit 0 00:28:35.244 success 0, unsuccessful 35291, failed 0 00:28:35.244 09:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:35.244 09:22:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:38.558 Initializing NVMe Controllers 00:28:38.558 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:38.558 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:38.558 Initialization complete. Launching workers. 00:28:38.558 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69346, failed: 0 00:28:38.558 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30223, failed to submit 39123 00:28:38.558 success 0, unsuccessful 30223, failed 0 00:28:38.558 09:22:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:38.558 09:22:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.840 Initializing NVMe Controllers 00:28:41.840 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:41.840 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:41.840 Initialization complete. Launching workers. 
00:28:41.840 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79241, failed: 0 00:28:41.840 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19782, failed to submit 59459 00:28:41.840 success 0, unsuccessful 19782, failed 0 00:28:41.840 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:41.840 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:28:41.841 09:22:20 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:42.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.308 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:44.308 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:44.308 ************************************ 00:28:44.308 END TEST 
kernel_target_abort 00:28:44.308 ************************************ 00:28:44.308 00:28:44.308 real 0m13.323s 00:28:44.308 user 0m6.467s 00:28:44.308 sys 0m4.277s 00:28:44.308 09:22:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.308 09:22:23 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:44.566 rmmod nvme_tcp 00:28:44.566 rmmod nvme_fabrics 00:28:44.566 rmmod nvme_keyring 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:44.566 Process with pid 110109 is not found 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 110109 ']' 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 110109 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 110109 ']' 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 110109 00:28:44.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (110109) - No such process 00:28:44.566 09:22:23 nvmf_abort_qd_sizes 
-- common/autotest_common.sh@981 -- # echo 'Process with pid 110109 is not found' 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:28:44.566 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:44.824 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:44.824 Waiting for block devices as requested 00:28:44.824 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.082 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@122 -- # delete_dev nvmf_br 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=nvmf_br in_ns= 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete nvmf_br' 00:28:45.082 09:22:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete nvmf_br 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/initiator0/address ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- # delete_dev initiator0 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator0 in_ns= 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator0' 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator0 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/initiator1/address ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 3 == 3 )) 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@266 -- # delete_dev initiator1 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@114 -- # local dev=initiator1 in_ns= 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@115 -- # [[ -n '' ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # eval ' ip link delete initiator1' 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@117 -- # ip link delete initiator1 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target0/address ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/target1/address ]] 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # continue 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:45.341 
09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:28:45.341 00:28:45.341 real 0m27.615s 00:28:45.341 user 0m51.431s 00:28:45.341 sys 0m7.443s 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:45.341 ************************************ 00:28:45.341 END TEST nvmf_abort_qd_sizes 00:28:45.341 ************************************ 00:28:45.341 09:22:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:45.341 09:22:24 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:45.341 09:22:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:45.341 09:22:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:45.341 09:22:24 -- common/autotest_common.sh@10 -- # set +x 00:28:45.341 ************************************ 00:28:45.341 START TEST keyring_file 00:28:45.341 ************************************ 00:28:45.341 09:22:24 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:45.341 * Looking for test storage... 
00:28:45.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:45.341 09:22:24 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:45.341 09:22:24 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:45.341 09:22:24 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:28:45.600 09:22:24 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:45.600 09:22:24 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:45.601 09:22:24 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:45.601 09:22:24 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.601 --rc genhtml_branch_coverage=1 00:28:45.601 --rc genhtml_function_coverage=1 00:28:45.601 --rc genhtml_legend=1 00:28:45.601 --rc geninfo_all_blocks=1 00:28:45.601 --rc geninfo_unexecuted_blocks=1 00:28:45.601 00:28:45.601 ' 00:28:45.601 09:22:24 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.601 --rc genhtml_branch_coverage=1 00:28:45.601 --rc genhtml_function_coverage=1 00:28:45.601 --rc genhtml_legend=1 00:28:45.601 --rc geninfo_all_blocks=1 00:28:45.601 --rc geninfo_unexecuted_blocks=1 00:28:45.601 00:28:45.601 ' 00:28:45.601 
09:22:24 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.601 --rc genhtml_branch_coverage=1 00:28:45.601 --rc genhtml_function_coverage=1 00:28:45.601 --rc genhtml_legend=1 00:28:45.601 --rc geninfo_all_blocks=1 00:28:45.601 --rc geninfo_unexecuted_blocks=1 00:28:45.601 00:28:45.601 ' 00:28:45.601 09:22:24 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:45.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:45.601 --rc genhtml_branch_coverage=1 00:28:45.601 --rc genhtml_function_coverage=1 00:28:45.601 --rc genhtml_legend=1 00:28:45.601 --rc geninfo_all_blocks=1 00:28:45.601 --rc geninfo_unexecuted_blocks=1 00:28:45.601 00:28:45.601 ' 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@16 -- # 
NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.601 09:22:24 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.601 09:22:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.601 09:22:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.601 09:22:24 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.601 09:22:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:45.601 09:22:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:28:45.601 09:22:24 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:45.601 09:22:24 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:45.601 09:22:24 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@50 -- # : 0 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:45.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:45.601 09:22:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.W5AiwwVjZ4 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@506 -- # 
digest=0 00:28:45.601 09:22:24 keyring_file -- nvmf/common.sh@507 -- # python - 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.W5AiwwVjZ4 00:28:45.601 09:22:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.W5AiwwVjZ4 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.W5AiwwVjZ4 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jo7GvIsQwe 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:28:45.602 09:22:24 keyring_file -- nvmf/common.sh@507 -- # python - 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jo7GvIsQwe 00:28:45.602 09:22:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jo7GvIsQwe 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.jo7GvIsQwe 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=111013 00:28:45.602 09:22:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 111013 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111013 ']' 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:45.602 09:22:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:45.602 [2024-11-20 09:22:24.496561] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:28:45.602 [2024-11-20 09:22:24.496990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111013 ] 00:28:45.861 [2024-11-20 09:22:24.646468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.861 [2024-11-20 09:22:24.716483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:46.798 09:22:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.798 [2024-11-20 09:22:25.455772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:46.798 null0 00:28:46.798 [2024-11-20 09:22:25.487723] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:46.798 [2024-11-20 09:22:25.488129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.798 09:22:25 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.798 [2024-11-20 09:22:25.519737] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:46.798 2024/11/20 09:22:25 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:28:46.798 request: 00:28:46.798 { 00:28:46.798 "method": "nvmf_subsystem_add_listener", 00:28:46.798 "params": { 00:28:46.798 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.798 "secure_channel": false, 00:28:46.798 "listen_address": { 00:28:46.798 "trtype": "tcp", 00:28:46.798 "traddr": "127.0.0.1", 00:28:46.798 "trsvcid": "4420" 00:28:46.798 } 00:28:46.798 } 00:28:46.798 } 00:28:46.798 Got JSON-RPC error response 00:28:46.798 GoRPCClient: error on JSON-RPC call 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:46.798 09:22:25 keyring_file -- keyring/file.sh@47 -- # bperfpid=111048 00:28:46.798 09:22:25 keyring_file -- 
keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:46.798 09:22:25 keyring_file -- keyring/file.sh@49 -- # waitforlisten 111048 /var/tmp/bperf.sock 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111048 ']' 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:46.798 09:22:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:46.798 [2024-11-20 09:22:25.581818] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:28:46.798 [2024-11-20 09:22:25.582160] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111048 ] 00:28:47.056 [2024-11-20 09:22:25.731169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.056 [2024-11-20 09:22:25.803098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.056 09:22:25 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:47.056 09:22:25 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:47.056 09:22:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:47.056 09:22:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:47.622 09:22:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jo7GvIsQwe 00:28:47.622 09:22:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jo7GvIsQwe 00:28:47.622 09:22:26 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:47.622 09:22:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:47.622 09:22:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.622 09:22:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:47.622 09:22:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:47.879 09:22:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.W5AiwwVjZ4 == \/\t\m\p\/\t\m\p\.\W\5\A\i\w\w\V\j\Z\4 ]] 00:28:47.879 09:22:26 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:47.879 09:22:26 
keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:47.879 09:22:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:47.879 09:22:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:47.879 09:22:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.444 09:22:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.jo7GvIsQwe == \/\t\m\p\/\t\m\p\.\j\o\7\G\v\I\s\Q\w\e ]] 00:28:48.444 09:22:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:48.444 09:22:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:48.444 09:22:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.444 09:22:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.444 09:22:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.444 09:22:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:48.715 09:22:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:48.715 09:22:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:48.715 09:22:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:48.715 09:22:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.715 09:22:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.715 09:22:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:48.715 09:22:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.972 09:22:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:48.972 09:22:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.973 09:22:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:49.231 [2024-11-20 09:22:27.963718] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:49.231 nvme0n1 00:28:49.231 09:22:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:49.231 09:22:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.231 09:22:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.231 09:22:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.231 09:22:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.231 09:22:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.490 09:22:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:49.490 09:22:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:49.490 09:22:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:49.490 09:22:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.490 09:22:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.490 09:22:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:49.490 09:22:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.748 09:22:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:49.748 09:22:28 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.006 Running I/O for 1 seconds... 
00:28:50.936 11680.00 IOPS, 45.62 MiB/s 00:28:50.936 Latency(us) 00:28:50.936 [2024-11-20T09:22:29.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.936 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:50.936 nvme0n1 : 1.01 11732.40 45.83 0.00 0.00 10880.00 4289.63 20852.36 00:28:50.936 [2024-11-20T09:22:29.855Z] =================================================================================================================== 00:28:50.936 [2024-11-20T09:22:29.855Z] Total : 11732.40 45.83 0.00 0.00 10880.00 4289.63 20852.36 00:28:50.936 { 00:28:50.936 "results": [ 00:28:50.936 { 00:28:50.936 "job": "nvme0n1", 00:28:50.936 "core_mask": "0x2", 00:28:50.936 "workload": "randrw", 00:28:50.936 "percentage": 50, 00:28:50.936 "status": "finished", 00:28:50.936 "queue_depth": 128, 00:28:50.936 "io_size": 4096, 00:28:50.936 "runtime": 1.006529, 00:28:50.936 "iops": 11732.399165846191, 00:28:50.936 "mibps": 45.82968424158668, 00:28:50.936 "io_failed": 0, 00:28:50.936 "io_timeout": 0, 00:28:50.936 "avg_latency_us": 10880.000670059044, 00:28:50.936 "min_latency_us": 4289.629090909091, 00:28:50.936 "max_latency_us": 20852.363636363636 00:28:50.936 } 00:28:50.936 ], 00:28:50.936 "core_count": 1 00:28:50.936 } 00:28:50.936 09:22:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:50.936 09:22:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:51.501 09:22:30 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:51.501 09:22:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:51.501 09:22:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.501 09:22:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.501 09:22:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:28:51.501 09:22:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.760 09:22:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:51.760 09:22:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:51.760 09:22:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:51.760 09:22:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:51.760 09:22:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.760 09:22:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.760 09:22:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.019 09:22:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:52.019 09:22:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:52.019 09:22:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:28:52.019 09:22:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:52.278 [2024-11-20 09:22:31.013267] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:52.278 [2024-11-20 09:22:31.013691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250fd0 (107): Transport endpoint is not connected 00:28:52.278 [2024-11-20 09:22:31.014679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2250fd0 (9): Bad file descriptor 00:28:52.278 [2024-11-20 09:22:31.015674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:52.278 [2024-11-20 09:22:31.015693] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:52.278 [2024-11-20 09:22:31.015705] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:52.278 [2024-11-20 09:22:31.015719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:28:52.278 2024/11/20 09:22:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:52.278 request: 00:28:52.278 { 00:28:52.278 "method": "bdev_nvme_attach_controller", 00:28:52.278 "params": { 00:28:52.278 "name": "nvme0", 00:28:52.278 "trtype": "tcp", 00:28:52.278 "traddr": "127.0.0.1", 00:28:52.278 "adrfam": "ipv4", 00:28:52.278 "trsvcid": "4420", 00:28:52.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:52.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:52.278 "prchk_reftag": false, 00:28:52.278 "prchk_guard": false, 00:28:52.278 "hdgst": false, 00:28:52.278 "ddgst": false, 00:28:52.278 "psk": "key1", 00:28:52.278 "allow_unrecognized_csi": false 00:28:52.278 } 00:28:52.278 } 00:28:52.278 Got JSON-RPC error response 00:28:52.278 GoRPCClient: error on JSON-RPC call 00:28:52.278 09:22:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:52.278 09:22:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:52.278 09:22:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:52.278 09:22:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:52.278 09:22:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:52.278 09:22:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:52.278 09:22:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.278 09:22:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.278 09:22:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:52.278 09:22:31 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.536 09:22:31 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:52.536 09:22:31 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:52.536 09:22:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:52.536 09:22:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.536 09:22:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.536 09:22:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.536 09:22:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.794 09:22:31 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:52.794 09:22:31 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:52.794 09:22:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:53.361 09:22:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:53.361 09:22:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:53.361 09:22:32 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:53.361 09:22:32 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:53.361 09:22:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.620 09:22:32 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:53.620 09:22:32 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.W5AiwwVjZ4 00:28:53.620 09:22:32 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:53.620 09:22:32 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:53.620 09:22:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:53.620 09:22:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:54.187 [2024-11-20 09:22:32.815017] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.W5AiwwVjZ4': 0100660 00:28:54.187 [2024-11-20 09:22:32.815069] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:54.187 2024/11/20 09:22:32 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.W5AiwwVjZ4], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:54.187 request: 00:28:54.187 { 00:28:54.187 "method": "keyring_file_add_key", 00:28:54.187 "params": { 00:28:54.187 "name": "key0", 00:28:54.187 "path": "/tmp/tmp.W5AiwwVjZ4" 00:28:54.187 } 00:28:54.187 } 00:28:54.187 Got JSON-RPC error response 00:28:54.187 GoRPCClient: error on JSON-RPC call 00:28:54.187 09:22:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:54.187 09:22:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.187 09:22:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:54.187 09:22:32 keyring_file -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.187 09:22:32 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.W5AiwwVjZ4 00:28:54.187 09:22:32 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:54.187 09:22:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.W5AiwwVjZ4 00:28:54.445 09:22:33 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.W5AiwwVjZ4 00:28:54.445 09:22:33 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:54.445 09:22:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:54.445 09:22:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.445 09:22:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.445 09:22:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.445 09:22:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.704 09:22:33 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:54.704 09:22:33 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 
00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:54.704 09:22:33 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.704 09:22:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:54.963 [2024-11-20 09:22:33.687212] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.W5AiwwVjZ4': No such file or directory 00:28:54.963 [2024-11-20 09:22:33.687266] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:54.963 [2024-11-20 09:22:33.687289] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:54.963 [2024-11-20 09:22:33.687300] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:54.963 [2024-11-20 09:22:33.687311] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:54.963 [2024-11-20 09:22:33.687320] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:54.963 2024/11/20 09:22:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: 
Code=-19 Msg=No such device 00:28:54.963 request: 00:28:54.963 { 00:28:54.963 "method": "bdev_nvme_attach_controller", 00:28:54.963 "params": { 00:28:54.963 "name": "nvme0", 00:28:54.963 "trtype": "tcp", 00:28:54.963 "traddr": "127.0.0.1", 00:28:54.963 "adrfam": "ipv4", 00:28:54.963 "trsvcid": "4420", 00:28:54.963 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:54.963 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:54.963 "prchk_reftag": false, 00:28:54.963 "prchk_guard": false, 00:28:54.963 "hdgst": false, 00:28:54.963 "ddgst": false, 00:28:54.963 "psk": "key0", 00:28:54.963 "allow_unrecognized_csi": false 00:28:54.963 } 00:28:54.963 } 00:28:54.963 Got JSON-RPC error response 00:28:54.963 GoRPCClient: error on JSON-RPC call 00:28:54.963 09:22:33 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:54.963 09:22:33 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:54.963 09:22:33 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:54.963 09:22:33 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:54.963 09:22:33 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:54.963 09:22:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:55.222 09:22:34 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rKz6PM5QIf 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@20 
-- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:28:55.222 09:22:34 keyring_file -- nvmf/common.sh@507 -- # python - 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rKz6PM5QIf 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rKz6PM5QIf 00:28:55.222 09:22:34 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.rKz6PM5QIf 00:28:55.222 09:22:34 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rKz6PM5QIf 00:28:55.222 09:22:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rKz6PM5QIf 00:28:55.790 09:22:34 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:55.790 09:22:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:56.048 nvme0n1 00:28:56.048 09:22:34 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:56.048 09:22:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.048 09:22:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:56.048 09:22:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 
00:28:56.048 09:22:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.048 09:22:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:56.306 09:22:35 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:56.306 09:22:35 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:56.306 09:22:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:56.874 09:22:35 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:56.874 09:22:35 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:56.874 09:22:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.874 09:22:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.874 09:22:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:57.133 09:22:35 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:57.133 09:22:35 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:57.133 09:22:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:57.133 09:22:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:57.133 09:22:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:57.133 09:22:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.133 09:22:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:57.407 09:22:36 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:57.407 09:22:36 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:57.407 09:22:36 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:57.665 09:22:36 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:57.665 09:22:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.665 09:22:36 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:57.924 09:22:36 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:57.924 09:22:36 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rKz6PM5QIf 00:28:57.924 09:22:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rKz6PM5QIf 00:28:58.182 09:22:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.jo7GvIsQwe 00:28:58.182 09:22:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.jo7GvIsQwe 00:28:58.441 09:22:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.441 09:22:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.006 nvme0n1 00:28:59.006 09:22:37 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:59.006 09:22:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:59.264 09:22:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:59.265 "subsystems": [ 00:28:59.265 { 00:28:59.265 "subsystem": "keyring", 00:28:59.265 "config": [ 
00:28:59.265 { 00:28:59.265 "method": "keyring_file_add_key", 00:28:59.265 "params": { 00:28:59.265 "name": "key0", 00:28:59.265 "path": "/tmp/tmp.rKz6PM5QIf" 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "keyring_file_add_key", 00:28:59.265 "params": { 00:28:59.265 "name": "key1", 00:28:59.265 "path": "/tmp/tmp.jo7GvIsQwe" 00:28:59.265 } 00:28:59.265 } 00:28:59.265 ] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "iobuf", 00:28:59.265 "config": [ 00:28:59.265 { 00:28:59.265 "method": "iobuf_set_options", 00:28:59.265 "params": { 00:28:59.265 "enable_numa": false, 00:28:59.265 "large_bufsize": 135168, 00:28:59.265 "large_pool_count": 1024, 00:28:59.265 "small_bufsize": 8192, 00:28:59.265 "small_pool_count": 8192 00:28:59.265 } 00:28:59.265 } 00:28:59.265 ] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "sock", 00:28:59.265 "config": [ 00:28:59.265 { 00:28:59.265 "method": "sock_set_default_impl", 00:28:59.265 "params": { 00:28:59.265 "impl_name": "posix" 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "sock_impl_set_options", 00:28:59.265 "params": { 00:28:59.265 "enable_ktls": false, 00:28:59.265 "enable_placement_id": 0, 00:28:59.265 "enable_quickack": false, 00:28:59.265 "enable_recv_pipe": true, 00:28:59.265 "enable_zerocopy_send_client": false, 00:28:59.265 "enable_zerocopy_send_server": true, 00:28:59.265 "impl_name": "ssl", 00:28:59.265 "recv_buf_size": 4096, 00:28:59.265 "send_buf_size": 4096, 00:28:59.265 "tls_version": 0, 00:28:59.265 "zerocopy_threshold": 0 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "sock_impl_set_options", 00:28:59.265 "params": { 00:28:59.265 "enable_ktls": false, 00:28:59.265 "enable_placement_id": 0, 00:28:59.265 "enable_quickack": false, 00:28:59.265 "enable_recv_pipe": true, 00:28:59.265 "enable_zerocopy_send_client": false, 00:28:59.265 "enable_zerocopy_send_server": true, 00:28:59.265 "impl_name": "posix", 00:28:59.265 "recv_buf_size": 
2097152, 00:28:59.265 "send_buf_size": 2097152, 00:28:59.265 "tls_version": 0, 00:28:59.265 "zerocopy_threshold": 0 00:28:59.265 } 00:28:59.265 } 00:28:59.265 ] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "vmd", 00:28:59.265 "config": [] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "accel", 00:28:59.265 "config": [ 00:28:59.265 { 00:28:59.265 "method": "accel_set_options", 00:28:59.265 "params": { 00:28:59.265 "buf_count": 2048, 00:28:59.265 "large_cache_size": 16, 00:28:59.265 "sequence_count": 2048, 00:28:59.265 "small_cache_size": 128, 00:28:59.265 "task_count": 2048 00:28:59.265 } 00:28:59.265 } 00:28:59.265 ] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "bdev", 00:28:59.265 "config": [ 00:28:59.265 { 00:28:59.265 "method": "bdev_set_options", 00:28:59.265 "params": { 00:28:59.265 "bdev_auto_examine": true, 00:28:59.265 "bdev_io_cache_size": 256, 00:28:59.265 "bdev_io_pool_size": 65535, 00:28:59.265 "iobuf_large_cache_size": 16, 00:28:59.265 "iobuf_small_cache_size": 128 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_raid_set_options", 00:28:59.265 "params": { 00:28:59.265 "process_max_bandwidth_mb_sec": 0, 00:28:59.265 "process_window_size_kb": 1024 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_iscsi_set_options", 00:28:59.265 "params": { 00:28:59.265 "timeout_sec": 30 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_nvme_set_options", 00:28:59.265 "params": { 00:28:59.265 "action_on_timeout": "none", 00:28:59.265 "allow_accel_sequence": false, 00:28:59.265 "arbitration_burst": 0, 00:28:59.265 "bdev_retry_count": 3, 00:28:59.265 "ctrlr_loss_timeout_sec": 0, 00:28:59.265 "delay_cmd_submit": true, 00:28:59.265 "dhchap_dhgroups": [ 00:28:59.265 "null", 00:28:59.265 "ffdhe2048", 00:28:59.265 "ffdhe3072", 00:28:59.265 "ffdhe4096", 00:28:59.265 "ffdhe6144", 00:28:59.265 "ffdhe8192" 00:28:59.265 ], 00:28:59.265 "dhchap_digests": [ 00:28:59.265 
"sha256", 00:28:59.265 "sha384", 00:28:59.265 "sha512" 00:28:59.265 ], 00:28:59.265 "disable_auto_failback": false, 00:28:59.265 "fast_io_fail_timeout_sec": 0, 00:28:59.265 "generate_uuids": false, 00:28:59.265 "high_priority_weight": 0, 00:28:59.265 "io_path_stat": false, 00:28:59.265 "io_queue_requests": 512, 00:28:59.265 "keep_alive_timeout_ms": 10000, 00:28:59.265 "low_priority_weight": 0, 00:28:59.265 "medium_priority_weight": 0, 00:28:59.265 "nvme_adminq_poll_period_us": 10000, 00:28:59.265 "nvme_error_stat": false, 00:28:59.265 "nvme_ioq_poll_period_us": 0, 00:28:59.265 "rdma_cm_event_timeout_ms": 0, 00:28:59.265 "rdma_max_cq_size": 0, 00:28:59.265 "rdma_srq_size": 0, 00:28:59.265 "reconnect_delay_sec": 0, 00:28:59.265 "timeout_admin_us": 0, 00:28:59.265 "timeout_us": 0, 00:28:59.265 "transport_ack_timeout": 0, 00:28:59.265 "transport_retry_count": 4, 00:28:59.265 "transport_tos": 0 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_nvme_attach_controller", 00:28:59.265 "params": { 00:28:59.265 "adrfam": "IPv4", 00:28:59.265 "ctrlr_loss_timeout_sec": 0, 00:28:59.265 "ddgst": false, 00:28:59.265 "fast_io_fail_timeout_sec": 0, 00:28:59.265 "hdgst": false, 00:28:59.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:59.265 "multipath": "multipath", 00:28:59.265 "name": "nvme0", 00:28:59.265 "prchk_guard": false, 00:28:59.265 "prchk_reftag": false, 00:28:59.265 "psk": "key0", 00:28:59.265 "reconnect_delay_sec": 0, 00:28:59.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.265 "traddr": "127.0.0.1", 00:28:59.265 "trsvcid": "4420", 00:28:59.265 "trtype": "TCP" 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_nvme_set_hotplug", 00:28:59.265 "params": { 00:28:59.265 "enable": false, 00:28:59.265 "period_us": 100000 00:28:59.265 } 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "method": "bdev_wait_for_examine" 00:28:59.265 } 00:28:59.265 ] 00:28:59.265 }, 00:28:59.265 { 00:28:59.265 "subsystem": "nbd", 00:28:59.265 
"config": [] 00:28:59.265 } 00:28:59.265 ] 00:28:59.266 }' 00:28:59.266 09:22:38 keyring_file -- keyring/file.sh@115 -- # killprocess 111048 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111048 ']' 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111048 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111048 00:28:59.266 killing process with pid 111048 00:28:59.266 Received shutdown signal, test time was about 1.000000 seconds 00:28:59.266 00:28:59.266 Latency(us) 00:28:59.266 [2024-11-20T09:22:38.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.266 [2024-11-20T09:22:38.185Z] =================================================================================================================== 00:28:59.266 [2024-11-20T09:22:38.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111048' 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@973 -- # kill 111048 00:28:59.266 09:22:38 keyring_file -- common/autotest_common.sh@978 -- # wait 111048 00:28:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:59.524 09:22:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=111519 00:28:59.524 09:22:38 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:59.524 09:22:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 111519 /var/tmp/bperf.sock 00:28:59.524 09:22:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 111519 ']' 00:28:59.524 09:22:38 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:59.524 09:22:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:59.524 "subsystems": [ 00:28:59.524 { 00:28:59.524 "subsystem": "keyring", 00:28:59.524 "config": [ 00:28:59.524 { 00:28:59.524 "method": "keyring_file_add_key", 00:28:59.525 "params": { 00:28:59.525 "name": "key0", 00:28:59.525 "path": "/tmp/tmp.rKz6PM5QIf" 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "keyring_file_add_key", 00:28:59.525 "params": { 00:28:59.525 "name": "key1", 00:28:59.525 "path": "/tmp/tmp.jo7GvIsQwe" 00:28:59.525 } 00:28:59.525 } 00:28:59.525 ] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "iobuf", 00:28:59.525 "config": [ 00:28:59.525 { 00:28:59.525 "method": "iobuf_set_options", 00:28:59.525 "params": { 00:28:59.525 "enable_numa": false, 00:28:59.525 "large_bufsize": 135168, 00:28:59.525 "large_pool_count": 1024, 00:28:59.525 "small_bufsize": 8192, 00:28:59.525 "small_pool_count": 8192 00:28:59.525 } 00:28:59.525 } 00:28:59.525 ] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "sock", 00:28:59.525 "config": [ 00:28:59.525 { 00:28:59.525 "method": "sock_set_default_impl", 00:28:59.525 "params": { 00:28:59.525 "impl_name": "posix" 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "sock_impl_set_options", 00:28:59.525 "params": { 00:28:59.525 "enable_ktls": false, 00:28:59.525 "enable_placement_id": 0, 00:28:59.525 "enable_quickack": false, 
00:28:59.525 "enable_recv_pipe": true, 00:28:59.525 "enable_zerocopy_send_client": false, 00:28:59.525 "enable_zerocopy_send_server": true, 00:28:59.525 "impl_name": "ssl", 00:28:59.525 "recv_buf_size": 4096, 00:28:59.525 "send_buf_size": 4096, 00:28:59.525 "tls_version": 0, 00:28:59.525 "zerocopy_threshold": 0 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "sock_impl_set_options", 00:28:59.525 "params": { 00:28:59.525 "enable_ktls": false, 00:28:59.525 "enable_placement_id": 0, 00:28:59.525 "enable_quickack": false, 00:28:59.525 "enable_recv_pipe": true, 00:28:59.525 "enable_zerocopy_send_client": false, 00:28:59.525 "enable_zerocopy_send_server": true, 00:28:59.525 "impl_name": "posix", 00:28:59.525 "recv_buf_size": 2097152, 00:28:59.525 "send_buf_size": 2097152, 00:28:59.525 "tls_version": 0, 00:28:59.525 "zerocopy_threshold": 0 00:28:59.525 } 00:28:59.525 } 00:28:59.525 ] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "vmd", 00:28:59.525 "config": [] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "accel", 00:28:59.525 "config": [ 00:28:59.525 { 00:28:59.525 "method": "accel_set_options", 00:28:59.525 "params": { 00:28:59.525 "buf_count": 2048, 00:28:59.525 "large_cache_size": 16, 00:28:59.525 "sequence_count": 2048, 00:28:59.525 "small_cache_size": 128, 00:28:59.525 "task_count": 2048 00:28:59.525 } 00:28:59.525 } 00:28:59.525 ] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "bdev", 00:28:59.525 "config": [ 00:28:59.525 { 00:28:59.525 "method": "bdev_set_options", 00:28:59.525 "params": { 00:28:59.525 "bdev_auto_examine": true, 00:28:59.525 "bdev_io_cache_size": 256, 00:28:59.525 "bdev_io_pool_size": 65535, 00:28:59.525 "iobuf_large_cache_size": 16, 00:28:59.525 "iobuf_small_cache_size": 128 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_raid_set_options", 00:28:59.525 "params": { 00:28:59.525 "process_max_bandwidth_mb_sec": 0, 00:28:59.525 "process_window_size_kb": 1024 
00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_iscsi_set_options", 00:28:59.525 "params": { 00:28:59.525 "timeout_sec": 30 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_nvme_set_options", 00:28:59.525 "params": { 00:28:59.525 "action_on_timeout": "none", 00:28:59.525 "allow_accel_sequence": false, 00:28:59.525 "arbitration_burst": 0, 00:28:59.525 "bdev_retry_count": 3, 00:28:59.525 "ctrlr_loss_timeout_sec": 0, 00:28:59.525 "delay_cmd_submit": true, 00:28:59.525 "dhchap_dhgroups": [ 00:28:59.525 "null", 00:28:59.525 "ffdhe2048", 00:28:59.525 "ffdhe3072", 00:28:59.525 "ffdhe4096", 00:28:59.525 "ffdhe6144", 00:28:59.525 "ffdhe8192" 00:28:59.525 ], 00:28:59.525 "dhchap_digests": [ 00:28:59.525 "sha256", 00:28:59.525 "sha384", 00:28:59.525 "sha512" 00:28:59.525 ], 00:28:59.525 "disable_auto_failback": false, 00:28:59.525 "fast_io_fail_timeout_sec": 0, 00:28:59.525 "generate_uuids": false, 00:28:59.525 "high_priority_weight": 0, 00:28:59.525 "io_path_stat": false, 00:28:59.525 "io_queue_requests": 512, 00:28:59.525 "keep_alive_timeout_ms": 10000, 00:28:59.525 "low_priority_weight": 0, 00:28:59.525 "medium_priority_weight": 0, 00:28:59.525 "nvme_adminq_poll_period_us": 10000, 00:28:59.525 "nvme_error_stat": false, 00:28:59.525 "nvme_ioq_poll_period_us": 0, 00:28:59.525 "rdma_cm_event_timeout_ms": 0, 00:28:59.525 "rdma_max_cq_size": 0, 00:28:59.525 "rdma_srq_size": 0, 00:28:59.525 "reconnect_delay_sec": 0, 00:28:59.525 "timeout_admin_us": 0, 00:28:59.525 "timeout_us": 0, 00:28:59.525 "transport_ack_timeout": 0, 00:28:59.525 "transport_retry_count": 4, 00:28:59.525 "transport_tos": 0 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_nvme_attach_controller", 00:28:59.525 "params": { 00:28:59.525 "adrfam": "IPv4", 00:28:59.525 "ctrlr_loss_timeout_sec": 0, 00:28:59.525 "ddgst": false, 00:28:59.525 "fast_io_fail_timeout_sec": 0, 00:28:59.525 "hdgst": false, 00:28:59.525 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:28:59.525 "multipath": "multipath", 00:28:59.525 "name": "nvme0", 00:28:59.525 "prchk_guard": false, 00:28:59.525 "prchk_reftag": false, 00:28:59.525 "psk": "key0", 00:28:59.525 "reconnect_delay_sec": 0, 00:28:59.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.525 "traddr": "127.0.0.1", 00:28:59.525 "trsvcid": "4420", 00:28:59.525 "trtype": "TCP" 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_nvme_set_hotplug", 00:28:59.525 "params": { 00:28:59.525 "enable": false, 00:28:59.525 "period_us": 100000 00:28:59.525 } 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "method": "bdev_wait_for_examine" 00:28:59.525 } 00:28:59.525 ] 00:28:59.525 }, 00:28:59.525 { 00:28:59.525 "subsystem": "nbd", 00:28:59.525 "config": [] 00:28:59.525 } 00:28:59.525 ] 00:28:59.526 }' 00:28:59.526 09:22:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.526 09:22:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:59.526 09:22:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.526 09:22:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:59.526 [2024-11-20 09:22:38.309908] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:28:59.526 [2024-11-20 09:22:38.310195] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111519 ] 00:28:59.784 [2024-11-20 09:22:38.453508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.784 [2024-11-20 09:22:38.522396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.042 [2024-11-20 09:22:38.711342] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:00.608 09:22:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.608 09:22:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:00.608 09:22:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:00.608 09:22:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:00.608 09:22:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.866 09:22:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:00.866 09:22:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:00.866 09:22:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:00.866 09:22:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.866 09:22:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.866 09:22:39 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.866 09:22:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:01.433 09:22:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:01.433 09:22:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:01.433 09:22:40 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:29:01.433 09:22:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:01.433 09:22:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:01.433 09:22:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:01.433 09:22:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:01.696 09:22:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:01.696 09:22:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:01.696 09:22:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:01.696 09:22:40 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:01.968 09:22:40 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:01.968 09:22:40 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:01.968 09:22:40 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.rKz6PM5QIf /tmp/tmp.jo7GvIsQwe 00:29:01.968 09:22:40 keyring_file -- keyring/file.sh@20 -- # killprocess 111519 00:29:01.968 09:22:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111519 ']' 00:29:01.968 09:22:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111519 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111519 00:29:01.969 killing process with pid 111519 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 111519' 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@973 -- # kill 111519 00:29:01.969 Received shutdown signal, test time was about 1.000000 seconds 00:29:01.969 00:29:01.969 Latency(us) 00:29:01.969 [2024-11-20T09:22:40.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.969 [2024-11-20T09:22:40.888Z] =================================================================================================================== 00:29:01.969 [2024-11-20T09:22:40.888Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:01.969 09:22:40 keyring_file -- common/autotest_common.sh@978 -- # wait 111519 00:29:02.227 09:22:41 keyring_file -- keyring/file.sh@21 -- # killprocess 111013 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 111013 ']' 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 111013 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111013 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:02.227 killing process with pid 111013 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111013' 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@973 -- # kill 111013 00:29:02.227 09:22:41 keyring_file -- common/autotest_common.sh@978 -- # wait 111013 00:29:02.795 00:29:02.795 real 0m17.394s 00:29:02.795 user 0m44.059s 00:29:02.795 sys 0m3.432s 00:29:02.795 09:22:41 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.795 09:22:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 
00:29:02.795 ************************************ 00:29:02.795 END TEST keyring_file 00:29:02.795 ************************************ 00:29:02.795 09:22:41 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:02.795 09:22:41 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:02.795 09:22:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:02.795 09:22:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.795 09:22:41 -- common/autotest_common.sh@10 -- # set +x 00:29:02.795 ************************************ 00:29:02.795 START TEST keyring_linux 00:29:02.795 ************************************ 00:29:02.795 09:22:41 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:02.795 Joined session keyring: 498921432 00:29:02.795 * Looking for test storage... 
00:29:02.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:02.795 09:22:41 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.795 09:22:41 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.795 09:22:41 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:29:03.054 09:22:41 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.054 09:22:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:03.054 09:22:41 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.054 09:22:41 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.054 --rc genhtml_branch_coverage=1 00:29:03.054 --rc genhtml_function_coverage=1 00:29:03.054 --rc genhtml_legend=1 00:29:03.054 --rc geninfo_all_blocks=1 00:29:03.054 --rc geninfo_unexecuted_blocks=1 00:29:03.054 00:29:03.054 ' 00:29:03.054 09:22:41 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.054 --rc genhtml_branch_coverage=1 00:29:03.054 --rc genhtml_function_coverage=1 00:29:03.054 --rc genhtml_legend=1 00:29:03.054 --rc geninfo_all_blocks=1 00:29:03.054 --rc geninfo_unexecuted_blocks=1 00:29:03.054 00:29:03.054 ' 
00:29:03.054 09:22:41 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.054 --rc genhtml_branch_coverage=1 00:29:03.054 --rc genhtml_function_coverage=1 00:29:03.054 --rc genhtml_legend=1 00:29:03.054 --rc geninfo_all_blocks=1 00:29:03.054 --rc geninfo_unexecuted_blocks=1 00:29:03.054 00:29:03.055 ' 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:03.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.055 --rc genhtml_branch_coverage=1 00:29:03.055 --rc genhtml_function_coverage=1 00:29:03.055 --rc genhtml_legend=1 00:29:03.055 --rc geninfo_all_blocks=1 00:29:03.055 --rc geninfo_unexecuted_blocks=1 00:29:03.055 00:29:03.055 ' 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:44ab6922-625f-4dd5-abd7-64d78c556468 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@16 -- # 
NVME_HOSTID=44ab6922-625f-4dd5-abd7-64d78c556468 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=virt 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:03.055 09:22:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.055 09:22:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.055 09:22:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.055 09:22:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.055 09:22:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.055 09:22:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.055 09:22:41 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.055 09:22:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:03.055 09:22:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:29:03.055 09:22:41 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:03.055 09:22:41 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:03.055 09:22:41 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:03.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # digest=0 
00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@507 -- # python - 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:03.055 /tmp/:spdk-test:key0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:29:03.055 09:22:41 keyring_linux -- nvmf/common.sh@507 -- # python - 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:03.055 /tmp/:spdk-test:key1 00:29:03.055 09:22:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=111684 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:03.055 09:22:41 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 111684 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111684 ']' 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:03.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:03.055 09:22:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:03.055 [2024-11-20 09:22:41.925310] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:29:03.055 [2024-11-20 09:22:41.925432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111684 ] 00:29:03.314 [2024-11-20 09:22:42.074425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.314 [2024-11-20 09:22:42.148931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.251 09:22:42 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.251 09:22:42 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:04.251 09:22:42 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:04.251 09:22:42 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.251 09:22:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:04.251 [2024-11-20 09:22:43.005054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:04.251 null0 00:29:04.251 [2024-11-20 09:22:43.037048] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:04.251 [2024-11-20 09:22:43.037263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.251 09:22:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:04.251 687278373 00:29:04.251 09:22:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:04.251 725210880 00:29:04.251 09:22:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=111720 00:29:04.251 09:22:43 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:04.251 09:22:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 111720 /var/tmp/bperf.sock 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 111720 ']' 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.251 09:22:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:04.251 [2024-11-20 09:22:43.113316] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:29:04.251 [2024-11-20 09:22:43.113419] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111720 ] 00:29:04.510 [2024-11-20 09:22:43.258186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.510 [2024-11-20 09:22:43.324792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.510 09:22:43 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.510 09:22:43 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:04.510 09:22:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:04.510 09:22:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:04.768 09:22:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:04.768 09:22:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.335 09:22:44 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:05.335 09:22:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:05.593 [2024-11-20 09:22:44.389892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:05.593 nvme0n1 00:29:05.593 09:22:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:05.593 
09:22:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:05.593 09:22:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:05.593 09:22:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:05.593 09:22:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:05.593 09:22:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.160 09:22:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:06.160 09:22:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:06.160 09:22:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:06.160 09:22:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:06.160 09:22:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.160 09:22:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:06.160 09:22:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@25 -- # sn=687278373 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@26 -- # [[ 687278373 == \6\8\7\2\7\8\3\7\3 ]] 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 687278373 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:06.428 09:22:45 keyring_linux -- keyring/linux.sh@79 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.428 Running I/O for 1 seconds... 00:29:07.381 12595.00 IOPS, 49.20 MiB/s 00:29:07.381 Latency(us) 00:29:07.381 [2024-11-20T09:22:46.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.381 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:07.381 nvme0n1 : 1.01 12611.23 49.26 0.00 0.00 10100.04 7238.75 20137.43 00:29:07.381 [2024-11-20T09:22:46.300Z] =================================================================================================================== 00:29:07.381 [2024-11-20T09:22:46.300Z] Total : 12611.23 49.26 0.00 0.00 10100.04 7238.75 20137.43 00:29:07.381 { 00:29:07.381 "results": [ 00:29:07.381 { 00:29:07.381 "job": "nvme0n1", 00:29:07.381 "core_mask": "0x2", 00:29:07.381 "workload": "randread", 00:29:07.381 "status": "finished", 00:29:07.381 "queue_depth": 128, 00:29:07.381 "io_size": 4096, 00:29:07.381 "runtime": 1.008942, 00:29:07.381 "iops": 12611.230377960279, 00:29:07.381 "mibps": 49.26261866390734, 00:29:07.381 "io_failed": 0, 00:29:07.381 "io_timeout": 0, 00:29:07.381 "avg_latency_us": 10100.041139721645, 00:29:07.381 "min_latency_us": 7238.749090909091, 00:29:07.381 "max_latency_us": 20137.425454545453 00:29:07.381 } 00:29:07.381 ], 00:29:07.381 "core_count": 1 00:29:07.381 } 00:29:07.381 09:22:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:07.381 09:22:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.640 09:22:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:07.640 09:22:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:07.640 09:22:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:07.641 09:22:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 
00:29:07.641 09:22:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:07.641 09:22:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.207 09:22:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:08.207 09:22:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:08.207 09:22:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:08.207 09:22:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.207 09:22:46 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:08.207 09:22:46 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.208 09:22:46 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:08.208 09:22:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.208 09:22:46 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:08.208 09:22:46 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:08.208 09:22:46 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.208 09:22:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.465 [2024-11-20 09:22:47.221051] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:08.465 [2024-11-20 09:22:47.222009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5cf50 (107): Transport endpoint is not connected 00:29:08.465 [2024-11-20 09:22:47.222998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5cf50 (9): Bad file descriptor 00:29:08.465 [2024-11-20 09:22:47.223996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:08.465 [2024-11-20 09:22:47.224022] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:08.465 [2024-11-20 09:22:47.224033] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:08.465 [2024-11-20 09:22:47.224046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:29:08.465 2024/11/20 09:22:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:08.465 request: 00:29:08.465 { 00:29:08.465 "method": "bdev_nvme_attach_controller", 00:29:08.465 "params": { 00:29:08.465 "name": "nvme0", 00:29:08.465 "trtype": "tcp", 00:29:08.465 "traddr": "127.0.0.1", 00:29:08.465 "adrfam": "ipv4", 00:29:08.465 "trsvcid": "4420", 00:29:08.465 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.465 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.465 "prchk_reftag": false, 00:29:08.465 "prchk_guard": false, 00:29:08.465 "hdgst": false, 00:29:08.465 "ddgst": false, 00:29:08.465 "psk": ":spdk-test:key1", 00:29:08.465 "allow_unrecognized_csi": false 00:29:08.465 } 00:29:08.465 } 00:29:08.465 Got JSON-RPC error response 00:29:08.465 GoRPCClient: error on JSON-RPC call 00:29:08.465 09:22:47 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:08.465 09:22:47 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:08.466 09:22:47 keyring_linux 
-- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@33 -- # sn=687278373 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 687278373 00:29:08.466 1 links removed 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@33 -- # sn=725210880 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 725210880 00:29:08.466 1 links removed 00:29:08.466 09:22:47 keyring_linux -- keyring/linux.sh@41 -- # killprocess 111720 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111720 ']' 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111720 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111720 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:08.466 killing process with pid 111720 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111720' 00:29:08.466 Received shutdown signal, test time was about 1.000000 seconds 00:29:08.466 00:29:08.466 Latency(us) 00:29:08.466 [2024-11-20T09:22:47.385Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:29:08.466 [2024-11-20T09:22:47.385Z] =================================================================================================================== 00:29:08.466 [2024-11-20T09:22:47.385Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 111720 00:29:08.466 09:22:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 111720 00:29:08.724 09:22:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 111684 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 111684 ']' 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 111684 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111684 00:29:08.724 killing process with pid 111684 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111684' 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@973 -- # kill 111684 00:29:08.724 09:22:47 keyring_linux -- common/autotest_common.sh@978 -- # wait 111684 00:29:09.292 ************************************ 00:29:09.292 END TEST keyring_linux 00:29:09.292 ************************************ 00:29:09.292 00:29:09.292 real 0m6.388s 00:29:09.292 user 0m12.495s 00:29:09.292 sys 0m1.708s 00:29:09.292 09:22:47 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:09.292 09:22:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:09.292 09:22:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- 
spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:09.292 09:22:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:09.292 09:22:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:09.292 09:22:47 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:09.292 09:22:47 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:09.292 09:22:47 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:09.292 09:22:47 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:09.292 09:22:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:09.292 09:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:09.292 09:22:47 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:09.292 09:22:47 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:09.292 09:22:47 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:09.292 09:22:47 -- common/autotest_common.sh@10 -- # set +x 00:29:11.239 INFO: APP EXITING 00:29:11.239 INFO: killing all VMs 00:29:11.239 INFO: killing vhost app 00:29:11.239 INFO: EXIT DONE 00:29:11.497 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:11.497 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:11.497 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:12.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:29:12.432 Cleaning 00:29:12.432 Removing: /var/run/dpdk/spdk0/config 00:29:12.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:12.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:12.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:12.432 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:12.432 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:12.432 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:12.432 Removing: /var/run/dpdk/spdk1/config 00:29:12.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:12.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:12.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:12.432 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:12.432 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:12.432 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:12.432 Removing: /var/run/dpdk/spdk2/config 00:29:12.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:12.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:12.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:12.432 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:12.432 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:12.432 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:12.432 Removing: /var/run/dpdk/spdk3/config 00:29:12.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:12.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:12.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:12.432 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:12.432 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:12.432 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:12.432 Removing: /var/run/dpdk/spdk4/config 00:29:12.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:12.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:12.432 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:12.432 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:12.432 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:12.432 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:12.432 Removing: /dev/shm/nvmf_trace.0 00:29:12.432 Removing: /dev/shm/spdk_tgt_trace.pid58680 00:29:12.432 Removing: /var/run/dpdk/spdk0 00:29:12.432 Removing: /var/run/dpdk/spdk1 00:29:12.432 Removing: /var/run/dpdk/spdk2 00:29:12.432 Removing: /var/run/dpdk/spdk3 00:29:12.432 Removing: /var/run/dpdk/spdk4 00:29:12.432 Removing: /var/run/dpdk/spdk_pid101548 00:29:12.432 Removing: /var/run/dpdk/spdk_pid101588 00:29:12.432 Removing: /var/run/dpdk/spdk_pid101937 00:29:12.432 Removing: /var/run/dpdk/spdk_pid101983 00:29:12.432 Removing: /var/run/dpdk/spdk_pid102376 00:29:12.432 Removing: /var/run/dpdk/spdk_pid102938 00:29:12.432 Removing: /var/run/dpdk/spdk_pid103370 00:29:12.432 Removing: /var/run/dpdk/spdk_pid104382 00:29:12.432 Removing: /var/run/dpdk/spdk_pid106038 00:29:12.432 Removing: /var/run/dpdk/spdk_pid106948 00:29:12.432 Removing: /var/run/dpdk/spdk_pid107056 00:29:12.432 Removing: /var/run/dpdk/spdk_pid107125 00:29:12.432 Removing: /var/run/dpdk/spdk_pid107510 00:29:12.432 Removing: /var/run/dpdk/spdk_pid107828 00:29:12.432 Removing: /var/run/dpdk/spdk_pid108388 00:29:12.432 Removing: /var/run/dpdk/spdk_pid108393 00:29:12.432 Removing: /var/run/dpdk/spdk_pid108794 00:29:12.432 Removing: /var/run/dpdk/spdk_pid108949 00:29:12.432 Removing: /var/run/dpdk/spdk_pid109105 00:29:12.432 Removing: /var/run/dpdk/spdk_pid109198 00:29:12.432 Removing: /var/run/dpdk/spdk_pid109355 00:29:12.432 Removing: /var/run/dpdk/spdk_pid109460 00:29:12.432 Removing: /var/run/dpdk/spdk_pid110169 00:29:12.432 Removing: /var/run/dpdk/spdk_pid110200 00:29:12.432 Removing: /var/run/dpdk/spdk_pid110235 00:29:12.432 Removing: /var/run/dpdk/spdk_pid110493 00:29:12.432 Removing: /var/run/dpdk/spdk_pid110530 00:29:12.432 Removing: 
/var/run/dpdk/spdk_pid110561 00:29:12.432 Removing: /var/run/dpdk/spdk_pid111013 00:29:12.432 Removing: /var/run/dpdk/spdk_pid111048 00:29:12.432 Removing: /var/run/dpdk/spdk_pid111519 00:29:12.432 Removing: /var/run/dpdk/spdk_pid111684 00:29:12.432 Removing: /var/run/dpdk/spdk_pid111720 00:29:12.432 Removing: /var/run/dpdk/spdk_pid58527 00:29:12.432 Removing: /var/run/dpdk/spdk_pid58680 00:29:12.432 Removing: /var/run/dpdk/spdk_pid58955 00:29:12.432 Removing: /var/run/dpdk/spdk_pid59047 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59073 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59183 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59213 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59352 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59637 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59821 00:29:12.691 Removing: /var/run/dpdk/spdk_pid59906 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60006 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60101 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60134 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60164 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60239 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60343 00:29:12.691 Removing: /var/run/dpdk/spdk_pid60979 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61037 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61098 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61113 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61193 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61221 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61305 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61334 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61386 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61408 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61454 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61488 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61655 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61685 00:29:12.691 Removing: /var/run/dpdk/spdk_pid61767 00:29:12.691 Removing: /var/run/dpdk/spdk_pid62256 00:29:12.691 Removing: 
/var/run/dpdk/spdk_pid62649 00:29:12.691 Removing: /var/run/dpdk/spdk_pid65168 00:29:12.691 Removing: /var/run/dpdk/spdk_pid65219 00:29:12.691 Removing: /var/run/dpdk/spdk_pid65582 00:29:12.691 Removing: /var/run/dpdk/spdk_pid65632 00:29:12.691 Removing: /var/run/dpdk/spdk_pid66042 00:29:12.691 Removing: /var/run/dpdk/spdk_pid66620 00:29:12.691 Removing: /var/run/dpdk/spdk_pid67080 00:29:12.691 Removing: /var/run/dpdk/spdk_pid68145 00:29:12.691 Removing: /var/run/dpdk/spdk_pid69861 00:29:12.691 Removing: /var/run/dpdk/spdk_pid70777 00:29:12.691 Removing: /var/run/dpdk/spdk_pid70890 00:29:12.691 Removing: /var/run/dpdk/spdk_pid70963 00:29:12.691 Removing: /var/run/dpdk/spdk_pid71384 00:29:12.691 Removing: /var/run/dpdk/spdk_pid75233 00:29:12.691 Removing: /var/run/dpdk/spdk_pid75650 00:29:12.691 Removing: /var/run/dpdk/spdk_pid76267 00:29:12.691 Removing: /var/run/dpdk/spdk_pid76791 00:29:12.691 Removing: /var/run/dpdk/spdk_pid82719 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83235 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83346 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83492 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83532 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83571 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83611 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83771 00:29:12.691 Removing: /var/run/dpdk/spdk_pid83936 00:29:12.691 Removing: /var/run/dpdk/spdk_pid84200 00:29:12.691 Removing: /var/run/dpdk/spdk_pid84323 00:29:12.691 Removing: /var/run/dpdk/spdk_pid84585 00:29:12.691 Removing: /var/run/dpdk/spdk_pid84697 00:29:12.691 Removing: /var/run/dpdk/spdk_pid84832 00:29:12.691 Removing: /var/run/dpdk/spdk_pid85227 00:29:12.691 Removing: /var/run/dpdk/spdk_pid85677 00:29:12.691 Removing: /var/run/dpdk/spdk_pid85678 00:29:12.691 Removing: /var/run/dpdk/spdk_pid85679 00:29:12.691 Removing: /var/run/dpdk/spdk_pid85956 00:29:12.691 Removing: /var/run/dpdk/spdk_pid86218 00:29:12.691 Removing: /var/run/dpdk/spdk_pid86647 00:29:12.691 Removing: 
/var/run/dpdk/spdk_pid87187 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87190 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87591 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87609 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87624 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87657 00:29:12.691 Removing: /var/run/dpdk/spdk_pid87662 00:29:12.691 Removing: /var/run/dpdk/spdk_pid88052 00:29:12.691 Removing: /var/run/dpdk/spdk_pid88097 00:29:12.691 Removing: /var/run/dpdk/spdk_pid88472 00:29:12.691 Removing: /var/run/dpdk/spdk_pid88714 00:29:12.691 Removing: /var/run/dpdk/spdk_pid89293 00:29:12.691 Removing: /var/run/dpdk/spdk_pid90846 00:29:12.691 Removing: /var/run/dpdk/spdk_pid90848 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93146 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93221 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93298 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93385 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93521 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93606 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93683 00:29:12.691 Removing: /var/run/dpdk/spdk_pid93758 00:29:12.691 Removing: /var/run/dpdk/spdk_pid94126 00:29:12.691 Removing: /var/run/dpdk/spdk_pid94693 00:29:12.691 Removing: /var/run/dpdk/spdk_pid95189 00:29:12.950 Removing: /var/run/dpdk/spdk_pid95524 00:29:12.950 Removing: /var/run/dpdk/spdk_pid96280 00:29:12.950 Removing: /var/run/dpdk/spdk_pid97690 00:29:12.950 Removing: /var/run/dpdk/spdk_pid97895 00:29:12.950 Removing: /var/run/dpdk/spdk_pid98182 00:29:12.950 Removing: /var/run/dpdk/spdk_pid98700 00:29:12.950 Removing: /var/run/dpdk/spdk_pid99062 00:29:12.950 Clean 00:29:12.950 09:22:51 -- common/autotest_common.sh@1453 -- # return 0 00:29:12.950 09:22:51 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:12.950 09:22:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.950 09:22:51 -- common/autotest_common.sh@10 -- # set +x 00:29:12.950 09:22:51 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:12.950 
09:22:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.950 09:22:51 -- common/autotest_common.sh@10 -- # set +x 00:29:12.950 09:22:51 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:12.950 09:22:51 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:12.950 09:22:51 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:12.950 09:22:51 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:12.950 09:22:51 -- spdk/autotest.sh@398 -- # hostname 00:29:12.950 09:22:51 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:13.209 geninfo: WARNING: invalid characters removed from testname! 
00:29:45.337 09:23:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.714 09:23:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:49.991 09:23:28 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:53.269 09:23:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.624 09:23:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:59.214 09:23:38 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:30:02.499 09:23:41 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:02.499 09:23:41 -- spdk/autorun.sh@1 -- $ timing_finish 00:30:02.499 09:23:41 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:30:02.499 09:23:41 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:02.499 09:23:41 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:30:02.499 09:23:41 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:02.499 + [[ -n 5371 ]] 00:30:02.499 + sudo kill 5371 00:30:02.508 [Pipeline] } 00:30:02.527 [Pipeline] // timeout 00:30:02.533 [Pipeline] } 00:30:02.548 [Pipeline] // stage 00:30:02.553 [Pipeline] } 00:30:02.569 [Pipeline] // catchError 00:30:02.579 [Pipeline] stage 00:30:02.581 [Pipeline] { (Stop VM) 00:30:02.594 [Pipeline] sh 00:30:02.872 + vagrant halt 00:30:07.060 ==> default: Halting domain... 00:30:13.635 [Pipeline] sh 00:30:13.914 + vagrant destroy -f 00:30:18.102 ==> default: Removing domain... 
00:30:18.114 [Pipeline] sh 00:30:18.395 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:30:18.404 [Pipeline] } 00:30:18.421 [Pipeline] // stage 00:30:18.428 [Pipeline] } 00:30:18.443 [Pipeline] // dir 00:30:18.449 [Pipeline] } 00:30:18.465 [Pipeline] // wrap 00:30:18.472 [Pipeline] } 00:30:18.486 [Pipeline] // catchError 00:30:18.496 [Pipeline] stage 00:30:18.499 [Pipeline] { (Epilogue) 00:30:18.512 [Pipeline] sh 00:30:18.795 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:25.371 [Pipeline] catchError 00:30:25.373 [Pipeline] { 00:30:25.385 [Pipeline] sh 00:30:25.665 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:25.665 Artifacts sizes are good 00:30:25.675 [Pipeline] } 00:30:25.692 [Pipeline] // catchError 00:30:25.704 [Pipeline] archiveArtifacts 00:30:25.712 Archiving artifacts 00:30:25.825 [Pipeline] cleanWs 00:30:25.837 [WS-CLEANUP] Deleting project workspace... 00:30:25.837 [WS-CLEANUP] Deferred wipeout is used... 00:30:25.843 [WS-CLEANUP] done 00:30:25.845 [Pipeline] } 00:30:25.860 [Pipeline] // stage 00:30:25.866 [Pipeline] } 00:30:25.880 [Pipeline] // node 00:30:25.887 [Pipeline] End of Pipeline 00:30:25.930 Finished: SUCCESS